00:00:00.001 Started by upstream project "autotest-per-patch" build number 132039
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.070 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.133 Fetching changes from the remote Git repository
00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.214 Using shallow fetch with depth 1
00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.214 > git --version # timeout=10
00:00:00.282 > git --version # 'git version 2.39.2'
00:00:00.282 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.329 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.329 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.103 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.121 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.137 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD)
00:00:07.137 > git config core.sparsecheckout # timeout=10
00:00:07.150 > git read-tree -mu HEAD # timeout=10
00:00:07.168 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5
00:00:07.186 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job"
00:00:07.186 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10
00:00:07.272 [Pipeline] Start of Pipeline
00:00:07.286 [Pipeline] library
00:00:07.287 Loading library shm_lib@master
00:00:07.288 Library shm_lib@master is cached. Copying from home.
00:00:07.306 [Pipeline] node
00:00:07.322 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.323 [Pipeline] {
00:00:07.336 [Pipeline] catchError
00:00:07.338 [Pipeline] {
00:00:07.354 [Pipeline] wrap
00:00:07.361 [Pipeline] {
00:00:07.369 [Pipeline] stage
00:00:07.371 [Pipeline] { (Prologue)
00:00:07.600 [Pipeline] sh
00:00:07.883 + logger -p user.info -t JENKINS-CI
00:00:07.902 [Pipeline] echo
00:00:07.904 Node: WFP6
00:00:07.912 [Pipeline] sh
00:00:08.219 [Pipeline] setCustomBuildProperty
00:00:08.232 [Pipeline] echo
00:00:08.234 Cleanup processes
00:00:08.240 [Pipeline] sh
00:00:08.531 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.531 2538127 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.544 [Pipeline] sh
00:00:08.829 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.829 ++ grep -v 'sudo pgrep'
00:00:08.829 ++ awk '{print $1}'
00:00:08.829 + sudo kill -9
00:00:08.829 + true
00:00:08.844 [Pipeline] cleanWs
00:00:08.854 [WS-CLEANUP] Deleting project workspace...
00:00:08.854 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.861 [WS-CLEANUP] done
00:00:08.865 [Pipeline] setCustomBuildProperty
00:00:08.879 [Pipeline] sh
00:00:09.162 + sudo git config --global --replace-all safe.directory '*'
00:00:09.252 [Pipeline] httpRequest
00:00:09.727 [Pipeline] echo
00:00:09.729 Sorcerer 10.211.164.101 is alive
00:00:09.739 [Pipeline] retry
00:00:09.741 [Pipeline] {
00:00:09.755 [Pipeline] httpRequest
00:00:09.759 HttpMethod: GET
00:00:09.760 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:09.760 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:09.774 Response Code: HTTP/1.1 200 OK
00:00:09.774 Success: Status code 200 is in the accepted range: 200,404
00:00:09.774 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:11.524 [Pipeline] }
00:00:11.538 [Pipeline] // retry
00:00:11.544 [Pipeline] sh
00:00:11.828 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:11.844 [Pipeline] httpRequest
00:00:12.299 [Pipeline] echo
00:00:12.301 Sorcerer 10.211.164.101 is alive
00:00:12.310 [Pipeline] retry
00:00:12.312 [Pipeline] {
00:00:12.326 [Pipeline] httpRequest
00:00:12.331 HttpMethod: GET
00:00:12.331 URL: http://10.211.164.101/packages/spdk_018f4719671d44c0d31ffb2ec974b161919a7ca6.tar.gz
00:00:12.332 Sending request to url: http://10.211.164.101/packages/spdk_018f4719671d44c0d31ffb2ec974b161919a7ca6.tar.gz
00:00:12.334 Response Code: HTTP/1.1 200 OK
00:00:12.335 Success: Status code 200 is in the accepted range: 200,404
00:00:12.335 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_018f4719671d44c0d31ffb2ec974b161919a7ca6.tar.gz
00:00:29.710 [Pipeline] }
00:00:29.727 [Pipeline] // retry
00:00:29.735 [Pipeline] sh
00:00:30.019 + tar --no-same-owner -xf spdk_018f4719671d44c0d31ffb2ec974b161919a7ca6.tar.gz
00:00:32.563 [Pipeline] sh
00:00:32.847 + git -C spdk log --oneline -n5
00:00:32.847 018f47196 test/nvmf: Interrupt test for local pcie nvme device
00:00:32.847 84ba7a31c nvme/perf: interrupt mode support for pcie controller
00:00:32.847 924f1e4a7 test/scheduler: Account for multiple cpus in the affinity mask
00:00:32.847 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:00:32.847 1a1586409 nvmf: use bdev's nsid for admin command passthru
00:00:32.859 [Pipeline] }
00:00:32.873 [Pipeline] // stage
00:00:32.882 [Pipeline] stage
00:00:32.884 [Pipeline] { (Prepare)
00:00:32.902 [Pipeline] writeFile
00:00:32.917 [Pipeline] sh
00:00:33.200 + logger -p user.info -t JENKINS-CI
00:00:33.211 [Pipeline] sh
00:00:33.492 + logger -p user.info -t JENKINS-CI
00:00:33.504 [Pipeline] sh
00:00:33.787 + cat autorun-spdk.conf
00:00:33.787 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.788 SPDK_TEST_NVMF=1
00:00:33.788 SPDK_TEST_NVME_CLI=1
00:00:33.788 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.788 SPDK_TEST_NVMF_NICS=e810
00:00:33.788 SPDK_TEST_VFIOUSER=1
00:00:33.788 SPDK_RUN_UBSAN=1
00:00:33.788 NET_TYPE=phy
00:00:33.795 RUN_NIGHTLY=0
00:00:33.800 [Pipeline] readFile
00:00:33.824 [Pipeline] withEnv
00:00:33.827 [Pipeline] {
00:00:33.839 [Pipeline] sh
00:00:34.123 + set -ex
00:00:34.123 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:34.123 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.123 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.123 ++ SPDK_TEST_NVMF=1
00:00:34.123 ++ SPDK_TEST_NVME_CLI=1
00:00:34.123 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.123 ++ SPDK_TEST_NVMF_NICS=e810
00:00:34.123 ++ SPDK_TEST_VFIOUSER=1
00:00:34.123 ++ SPDK_RUN_UBSAN=1
00:00:34.123 ++ NET_TYPE=phy
00:00:34.123 ++ RUN_NIGHTLY=0
00:00:34.123 + case $SPDK_TEST_NVMF_NICS in
00:00:34.123 + DRIVERS=ice
00:00:34.123 + [[ tcp == \r\d\m\a ]]
00:00:34.123 + [[ -n ice ]]
00:00:34.123 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:34.124 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:34.124 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:34.124 rmmod: ERROR: Module irdma is not currently loaded
00:00:34.124 rmmod: ERROR: Module i40iw is not currently loaded
00:00:34.124 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:34.124 + true
00:00:34.124 + for D in $DRIVERS
00:00:34.124 + sudo modprobe ice
00:00:34.124 + exit 0
00:00:34.132 [Pipeline] }
00:00:34.147 [Pipeline] // withEnv
00:00:34.153 [Pipeline] }
00:00:34.166 [Pipeline] // stage
00:00:34.176 [Pipeline] catchError
00:00:34.177 [Pipeline] {
00:00:34.191 [Pipeline] timeout
00:00:34.191 Timeout set to expire in 1 hr 0 min
00:00:34.192 [Pipeline] {
00:00:34.207 [Pipeline] stage
00:00:34.209 [Pipeline] { (Tests)
00:00:34.223 [Pipeline] sh
00:00:34.508 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.508 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.508 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.508 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:34.508 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.508 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:34.508 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:34.508 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:34.508 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:34.508 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:34.508 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:34.508 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:34.508 + source /etc/os-release
00:00:34.508 ++ NAME='Fedora Linux'
00:00:34.508 ++ VERSION='39 (Cloud Edition)'
00:00:34.508 ++ ID=fedora
00:00:34.508 ++ VERSION_ID=39
00:00:34.508 ++ VERSION_CODENAME=
00:00:34.508 ++ PLATFORM_ID=platform:f39
00:00:34.508 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:34.508 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:34.508 ++ LOGO=fedora-logo-icon
00:00:34.508 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:34.508 ++ HOME_URL=https://fedoraproject.org/
00:00:34.508 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:34.508 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:34.508 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:34.508 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:34.508 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:34.508 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:34.508 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:34.508 ++ SUPPORT_END=2024-11-12
00:00:34.508 ++ VARIANT='Cloud Edition'
00:00:34.508 ++ VARIANT_ID=cloud
00:00:34.508 + uname -a
00:00:34.508 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:34.508 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:37.042 Hugepages
00:00:37.042 node hugesize free / total
00:00:37.042 node0 1048576kB 0 / 0
00:00:37.042 node0 2048kB 0 / 0
00:00:37.042 node1 1048576kB 0 / 0
00:00:37.042 node1 2048kB 0 / 0
00:00:37.042
00:00:37.042 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:37.042 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:37.042 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:37.042 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:37.042 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:37.042 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:37.042 + rm -f /tmp/spdk-ld-path
00:00:37.043 + source autorun-spdk.conf
00:00:37.043 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.043 ++ SPDK_TEST_NVMF=1
00:00:37.043 ++ SPDK_TEST_NVME_CLI=1
00:00:37.043 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.043 ++ SPDK_TEST_NVMF_NICS=e810
00:00:37.043 ++ SPDK_TEST_VFIOUSER=1
00:00:37.043 ++ SPDK_RUN_UBSAN=1
00:00:37.043 ++ NET_TYPE=phy
00:00:37.043 ++ RUN_NIGHTLY=0
00:00:37.043 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:37.043 + [[ -n '' ]]
00:00:37.043 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:37.043 + for M in /var/spdk/build-*-manifest.txt
00:00:37.043 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:37.043 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:37.043 + for M in /var/spdk/build-*-manifest.txt
00:00:37.043 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:37.043 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:37.043 + for M in /var/spdk/build-*-manifest.txt
00:00:37.043 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:37.043 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:37.043 ++ uname
00:00:37.043 + [[ Linux == \L\i\n\u\x ]]
00:00:37.043 + sudo dmesg -T
00:00:37.043 + sudo dmesg --clear
00:00:37.043 + dmesg_pid=2539174
00:00:37.043 + [[ Fedora Linux == FreeBSD ]]
00:00:37.043 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:37.043 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:37.043 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:37.043 + [[ -x /usr/src/fio-static/fio ]]
00:00:37.043 + export FIO_BIN=/usr/src/fio-static/fio
00:00:37.043 + FIO_BIN=/usr/src/fio-static/fio
00:00:37.043 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:37.043 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:37.043 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:37.043 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:37.043 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:37.043 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:37.043 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:37.043 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:37.043 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.043 + sudo dmesg -Tw
00:00:37.043 16:12:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:37.043 16:12:03 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:37.043 16:12:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:37.043 16:12:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:37.043 16:12:03 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.303 16:12:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:37.303 16:12:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:37.303 16:12:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:37.303 16:12:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:37.303 16:12:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:37.303 16:12:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:37.303 16:12:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:37.303 16:12:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:37.303 16:12:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:37.303 16:12:03 -- paths/export.sh@5 -- $ export PATH
00:00:37.303 16:12:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:37.303 16:12:03 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:37.303 16:12:03 -- common/autobuild_common.sh@486 -- $ date +%s
00:00:37.303 16:12:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730733123.XXXXXX
00:00:37.303 16:12:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730733123.LcfWjr
00:00:37.303 16:12:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:00:37.303 16:12:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:00:37.303 16:12:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:37.303 16:12:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:37.303 16:12:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:37.303 16:12:03 -- common/autobuild_common.sh@502 -- $ get_config_params
00:00:37.303 16:12:03 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:37.303 16:12:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:37.303 16:12:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:37.303 16:12:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:00:37.303 16:12:03 -- pm/common@17 -- $ local monitor
00:00:37.303 16:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:37.303 16:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:37.303 16:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:37.303 16:12:03 -- pm/common@21 -- $ date +%s
00:00:37.303 16:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:37.303 16:12:03 -- pm/common@21 -- $ date +%s
00:00:37.303 16:12:03 -- pm/common@21 -- $ date +%s
00:00:37.303 16:12:03 -- pm/common@25 -- $ sleep 1
00:00:37.303 16:12:03 -- pm/common@21 -- $ date +%s
00:00:37.303 16:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730733123
00:00:37.303 16:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730733123
00:00:37.303 16:12:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730733123
00:00:37.303 16:12:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730733123
00:00:37.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730733123_collect-vmstat.pm.log
00:00:37.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730733123_collect-cpu-load.pm.log
00:00:37.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730733123_collect-cpu-temp.pm.log
00:00:37.303 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730733123_collect-bmc-pm.bmc.pm.log
00:00:38.242 16:12:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:00:38.242 16:12:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:38.242 16:12:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:38.242 16:12:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:38.242 16:12:04 -- spdk/autobuild.sh@16 -- $ date -u
00:00:38.242 Mon Nov 4 03:12:04 PM UTC 2024
00:00:38.242 16:12:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:38.242 v25.01-pre-161-g018f47196
00:00:38.242 16:12:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:38.242 16:12:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:38.242 16:12:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:38.242 16:12:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:38.242 16:12:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:38.242 16:12:04 -- common/autotest_common.sh@10 -- $ set +x
00:00:38.242 ************************************
00:00:38.242 START TEST ubsan
00:00:38.242 ************************************
00:00:38.242 16:12:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:38.242 using ubsan
00:00:38.242
00:00:38.242 real 0m0.000s
00:00:38.242 user 0m0.000s
00:00:38.242 sys 0m0.000s
00:00:38.242 16:12:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:38.242 16:12:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:38.242 ************************************
00:00:38.242 END TEST ubsan
00:00:38.242 ************************************
00:00:38.242 16:12:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:38.242 16:12:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:38.242 16:12:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:38.242 16:12:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:38.500 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:38.500 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:38.758 Using 'verbs' RDMA provider
00:00:51.903 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:01.882 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:02.142 Creating mk/config.mk...done.
00:01:02.142 Creating mk/cc.flags.mk...done.
00:01:02.142 Type 'make' to build.
00:01:02.142 16:12:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:02.142 16:12:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:02.142 16:12:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:02.142 16:12:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.142 ************************************
00:01:02.142 START TEST make
00:01:02.142 ************************************
00:01:02.142 16:12:28 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:02.401 make[1]: Nothing to be done for 'all'.
00:01:03.780 The Meson build system
00:01:03.780 Version: 1.5.0
00:01:03.780 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:03.780 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:03.780 Build type: native build
00:01:03.780 Project name: libvfio-user
00:01:03.780 Project version: 0.0.1
00:01:03.780 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:03.780 C linker for the host machine: cc ld.bfd 2.40-14
00:01:03.780 Host machine cpu family: x86_64
00:01:03.780 Host machine cpu: x86_64
00:01:03.780 Run-time dependency threads found: YES
00:01:03.780 Library dl found: YES
00:01:03.780 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:03.780 Run-time dependency json-c found: YES 0.17
00:01:03.780 Run-time dependency cmocka found: YES 1.1.7
00:01:03.780 Program pytest-3 found: NO
00:01:03.780 Program flake8 found: NO
00:01:03.780 Program misspell-fixer found: NO
00:01:03.780 Program restructuredtext-lint found: NO
00:01:03.780 Program valgrind found: YES (/usr/bin/valgrind)
00:01:03.780 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:03.780 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:03.780 Compiler for C supports arguments -Wwrite-strings: YES
00:01:03.780 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:03.780 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:03.780 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:03.780 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:03.780 Build targets in project: 8
00:01:03.780 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:03.780 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:03.780
00:01:03.780 libvfio-user 0.0.1
00:01:03.780
00:01:03.780 User defined options
00:01:03.780 buildtype : debug
00:01:03.780 default_library: shared
00:01:03.780 libdir : /usr/local/lib
00:01:03.780
00:01:03.780 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:04.719 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:04.719 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:04.719 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:04.719 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:04.719 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:04.719 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:04.719 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:04.719 [7/37] Compiling C object samples/null.p/null.c.o
00:01:04.719 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:04.719 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:04.719 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:04.719 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:04.719 [12/37] Compiling C object samples/server.p/server.c.o
00:01:04.719 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:04.719 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:04.719 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:04.719 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:04.719 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:04.719 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:04.719 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:04.719 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:04.719 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:04.719 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:04.719 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:04.719 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:04.719 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:04.719 [26/37] Compiling C object samples/client.p/client.c.o
00:01:04.719 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:04.719 [28/37] Linking target samples/client
00:01:04.719 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:04.719 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:04.719 [31/37] Linking target test/unit_tests
00:01:04.977 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:04.977 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:04.977 [34/37] Linking target samples/server
00:01:04.977 [35/37] Linking target samples/gpio-pci-idio-16
00:01:04.977 [36/37] Linking target samples/null
00:01:04.977 [37/37] Linking target samples/lspci
00:01:04.977 INFO: autodetecting backend as ninja
00:01:04.977 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:04.977 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:05.244 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:05.244 ninja: no work to do.
00:01:10.562 The Meson build system
00:01:10.562 Version: 1.5.0
00:01:10.562 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:10.562 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:10.562 Build type: native build
00:01:10.562 Program cat found: YES (/usr/bin/cat)
00:01:10.562 Project name: DPDK
00:01:10.562 Project version: 24.03.0
00:01:10.562 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:10.562 C linker for the host machine: cc ld.bfd 2.40-14
00:01:10.562 Host machine cpu family: x86_64
00:01:10.562 Host machine cpu: x86_64
00:01:10.562 Message: ## Building in Developer Mode ##
00:01:10.562 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:10.562 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:10.562 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:10.562 Program python3 found: YES (/usr/bin/python3)
00:01:10.562 Program cat found: YES (/usr/bin/cat)
00:01:10.562 Compiler for C supports arguments -march=native: YES
00:01:10.562 Checking for size of "void *" : 8
00:01:10.562 Checking for size of "void *" : 8 (cached)
00:01:10.562 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:10.562 Library m found: YES
00:01:10.562 Library numa found: YES
00:01:10.562 Has header "numaif.h" : YES
00:01:10.562 Library fdt found: NO
00:01:10.562 Library execinfo found: NO
00:01:10.562 Has header "execinfo.h" : YES
00:01:10.562 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:10.562 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:10.562 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:10.562 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:10.562 Run-time dependency openssl found: YES 3.1.1
00:01:10.562 Run-time dependency libpcap found: YES 1.10.4
00:01:10.562 Has header "pcap.h" with dependency libpcap: YES
00:01:10.562 Compiler for C supports arguments -Wcast-qual: YES
00:01:10.562 Compiler for C supports arguments -Wdeprecated: YES
00:01:10.562 Compiler for C supports arguments -Wformat: YES
00:01:10.562 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:10.562 Compiler for C supports arguments -Wformat-security: NO
00:01:10.562 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:10.562 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:10.562 Compiler for C supports arguments -Wnested-externs: YES
00:01:10.562 Compiler for C supports arguments -Wold-style-definition: YES
00:01:10.562 Compiler for C supports arguments -Wpointer-arith: YES
00:01:10.562 Compiler for C supports arguments -Wsign-compare: YES
00:01:10.562 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:10.562 Compiler for C supports arguments -Wundef: YES
00:01:10.562 Compiler for C supports arguments -Wwrite-strings: YES
00:01:10.562 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:10.562 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:10.562 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:10.562 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:10.562 Program objdump found: YES (/usr/bin/objdump)
00:01:10.562 Compiler for C supports arguments -mavx512f: YES
00:01:10.562 Checking if "AVX512 checking" compiles: YES
00:01:10.562 Fetching value of define "__SSE4_2__" : 1
00:01:10.562 Fetching value of define "__AES__" : 1
00:01:10.562 Fetching value of define "__AVX__" : 1
00:01:10.562 Fetching value of define "__AVX2__" : 1
00:01:10.562 Fetching value of define "__AVX512BW__" : 1
00:01:10.562 Fetching value of define "__AVX512CD__" : 1
00:01:10.562 Fetching value of define "__AVX512DQ__" : 1
00:01:10.562 Fetching value of define "__AVX512F__" : 1
00:01:10.562 Fetching value of define "__AVX512VL__" : 1 00:01:10.562 Fetching value of define "__PCLMUL__" : 1 00:01:10.562 Fetching value of define "__RDRND__" : 1 00:01:10.562 Fetching value of define "__RDSEED__" : 1 00:01:10.562 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:10.562 Fetching value of define "__znver1__" : (undefined) 00:01:10.562 Fetching value of define "__znver2__" : (undefined) 00:01:10.562 Fetching value of define "__znver3__" : (undefined) 00:01:10.562 Fetching value of define "__znver4__" : (undefined) 00:01:10.562 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:10.562 Message: lib/log: Defining dependency "log" 00:01:10.562 Message: lib/kvargs: Defining dependency "kvargs" 00:01:10.562 Message: lib/telemetry: Defining dependency "telemetry" 00:01:10.562 Checking for function "getentropy" : NO 00:01:10.562 Message: lib/eal: Defining dependency "eal" 00:01:10.562 Message: lib/ring: Defining dependency "ring" 00:01:10.562 Message: lib/rcu: Defining dependency "rcu" 00:01:10.562 Message: lib/mempool: Defining dependency "mempool" 00:01:10.562 Message: lib/mbuf: Defining dependency "mbuf" 00:01:10.562 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:10.562 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:10.562 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:10.562 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:10.562 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:10.562 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:10.562 Compiler for C supports arguments -mpclmul: YES 00:01:10.562 Compiler for C supports arguments -maes: YES 00:01:10.562 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:10.562 Compiler for C supports arguments -mavx512bw: YES 00:01:10.562 Compiler for C supports arguments -mavx512dq: YES 00:01:10.562 Compiler for C supports arguments -mavx512vl: YES 00:01:10.562 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:10.562 Compiler for C supports arguments -mavx2: YES 00:01:10.562 Compiler for C supports arguments -mavx: YES 00:01:10.562 Message: lib/net: Defining dependency "net" 00:01:10.562 Message: lib/meter: Defining dependency "meter" 00:01:10.562 Message: lib/ethdev: Defining dependency "ethdev" 00:01:10.562 Message: lib/pci: Defining dependency "pci" 00:01:10.562 Message: lib/cmdline: Defining dependency "cmdline" 00:01:10.562 Message: lib/hash: Defining dependency "hash" 00:01:10.562 Message: lib/timer: Defining dependency "timer" 00:01:10.562 Message: lib/compressdev: Defining dependency "compressdev" 00:01:10.562 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:10.562 Message: lib/dmadev: Defining dependency "dmadev" 00:01:10.562 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:10.562 Message: lib/power: Defining dependency "power" 00:01:10.562 Message: lib/reorder: Defining dependency "reorder" 00:01:10.562 Message: lib/security: Defining dependency "security" 00:01:10.562 Has header "linux/userfaultfd.h" : YES 00:01:10.562 Has header "linux/vduse.h" : YES 00:01:10.562 Message: lib/vhost: Defining dependency "vhost" 00:01:10.562 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:10.562 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:10.562 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:10.562 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:10.562 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:10.562 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:10.562 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:10.562 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:10.562 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:10.562 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:10.562 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:10.562 Configuring doxy-api-html.conf using configuration 00:01:10.562 Configuring doxy-api-man.conf using configuration 00:01:10.562 Program mandb found: YES (/usr/bin/mandb) 00:01:10.562 Program sphinx-build found: NO 00:01:10.562 Configuring rte_build_config.h using configuration 00:01:10.562 Message: 00:01:10.563 ================= 00:01:10.563 Applications Enabled 00:01:10.563 ================= 00:01:10.563 00:01:10.563 apps: 00:01:10.563 00:01:10.563 00:01:10.563 Message: 00:01:10.563 ================= 00:01:10.563 Libraries Enabled 00:01:10.563 ================= 00:01:10.563 00:01:10.563 libs: 00:01:10.563 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:10.563 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:10.563 cryptodev, dmadev, power, reorder, security, vhost, 00:01:10.563 00:01:10.563 Message: 00:01:10.563 =============== 00:01:10.563 Drivers Enabled 00:01:10.563 =============== 00:01:10.563 00:01:10.563 common: 00:01:10.563 00:01:10.563 bus: 00:01:10.563 pci, vdev, 00:01:10.563 mempool: 00:01:10.563 ring, 00:01:10.563 dma: 00:01:10.563 00:01:10.563 net: 00:01:10.563 00:01:10.563 crypto: 00:01:10.563 00:01:10.563 compress: 00:01:10.563 00:01:10.563 vdpa: 00:01:10.563 00:01:10.563 00:01:10.563 Message: 00:01:10.563 ================= 00:01:10.563 Content Skipped 00:01:10.563 ================= 00:01:10.563 00:01:10.563 apps: 00:01:10.563 dumpcap: explicitly disabled via build config 00:01:10.563 graph: explicitly disabled via build config 00:01:10.563 pdump: explicitly disabled via build config 00:01:10.563 proc-info: explicitly disabled via build config 00:01:10.563 test-acl: explicitly disabled via build config 00:01:10.563 test-bbdev: explicitly disabled via build config 00:01:10.563 test-cmdline: explicitly disabled via build config 00:01:10.563 test-compress-perf: explicitly disabled via build config 00:01:10.563 test-crypto-perf: explicitly disabled 
via build config 00:01:10.563 test-dma-perf: explicitly disabled via build config 00:01:10.563 test-eventdev: explicitly disabled via build config 00:01:10.563 test-fib: explicitly disabled via build config 00:01:10.563 test-flow-perf: explicitly disabled via build config 00:01:10.563 test-gpudev: explicitly disabled via build config 00:01:10.563 test-mldev: explicitly disabled via build config 00:01:10.563 test-pipeline: explicitly disabled via build config 00:01:10.563 test-pmd: explicitly disabled via build config 00:01:10.563 test-regex: explicitly disabled via build config 00:01:10.563 test-sad: explicitly disabled via build config 00:01:10.563 test-security-perf: explicitly disabled via build config 00:01:10.563 00:01:10.563 libs: 00:01:10.563 argparse: explicitly disabled via build config 00:01:10.563 metrics: explicitly disabled via build config 00:01:10.563 acl: explicitly disabled via build config 00:01:10.563 bbdev: explicitly disabled via build config 00:01:10.563 bitratestats: explicitly disabled via build config 00:01:10.563 bpf: explicitly disabled via build config 00:01:10.563 cfgfile: explicitly disabled via build config 00:01:10.563 distributor: explicitly disabled via build config 00:01:10.563 efd: explicitly disabled via build config 00:01:10.563 eventdev: explicitly disabled via build config 00:01:10.563 dispatcher: explicitly disabled via build config 00:01:10.563 gpudev: explicitly disabled via build config 00:01:10.563 gro: explicitly disabled via build config 00:01:10.563 gso: explicitly disabled via build config 00:01:10.563 ip_frag: explicitly disabled via build config 00:01:10.563 jobstats: explicitly disabled via build config 00:01:10.563 latencystats: explicitly disabled via build config 00:01:10.563 lpm: explicitly disabled via build config 00:01:10.563 member: explicitly disabled via build config 00:01:10.563 pcapng: explicitly disabled via build config 00:01:10.563 rawdev: explicitly disabled via build config 00:01:10.563 regexdev: 
explicitly disabled via build config 00:01:10.563 mldev: explicitly disabled via build config 00:01:10.563 rib: explicitly disabled via build config 00:01:10.563 sched: explicitly disabled via build config 00:01:10.563 stack: explicitly disabled via build config 00:01:10.563 ipsec: explicitly disabled via build config 00:01:10.563 pdcp: explicitly disabled via build config 00:01:10.563 fib: explicitly disabled via build config 00:01:10.563 port: explicitly disabled via build config 00:01:10.563 pdump: explicitly disabled via build config 00:01:10.563 table: explicitly disabled via build config 00:01:10.563 pipeline: explicitly disabled via build config 00:01:10.563 graph: explicitly disabled via build config 00:01:10.563 node: explicitly disabled via build config 00:01:10.563 00:01:10.563 drivers: 00:01:10.563 common/cpt: not in enabled drivers build config 00:01:10.563 common/dpaax: not in enabled drivers build config 00:01:10.563 common/iavf: not in enabled drivers build config 00:01:10.563 common/idpf: not in enabled drivers build config 00:01:10.563 common/ionic: not in enabled drivers build config 00:01:10.563 common/mvep: not in enabled drivers build config 00:01:10.563 common/octeontx: not in enabled drivers build config 00:01:10.563 bus/auxiliary: not in enabled drivers build config 00:01:10.563 bus/cdx: not in enabled drivers build config 00:01:10.563 bus/dpaa: not in enabled drivers build config 00:01:10.563 bus/fslmc: not in enabled drivers build config 00:01:10.563 bus/ifpga: not in enabled drivers build config 00:01:10.563 bus/platform: not in enabled drivers build config 00:01:10.563 bus/uacce: not in enabled drivers build config 00:01:10.563 bus/vmbus: not in enabled drivers build config 00:01:10.563 common/cnxk: not in enabled drivers build config 00:01:10.563 common/mlx5: not in enabled drivers build config 00:01:10.563 common/nfp: not in enabled drivers build config 00:01:10.563 common/nitrox: not in enabled drivers build config 00:01:10.563 
common/qat: not in enabled drivers build config 00:01:10.563 common/sfc_efx: not in enabled drivers build config 00:01:10.563 mempool/bucket: not in enabled drivers build config 00:01:10.563 mempool/cnxk: not in enabled drivers build config 00:01:10.563 mempool/dpaa: not in enabled drivers build config 00:01:10.563 mempool/dpaa2: not in enabled drivers build config 00:01:10.563 mempool/octeontx: not in enabled drivers build config 00:01:10.563 mempool/stack: not in enabled drivers build config 00:01:10.563 dma/cnxk: not in enabled drivers build config 00:01:10.563 dma/dpaa: not in enabled drivers build config 00:01:10.563 dma/dpaa2: not in enabled drivers build config 00:01:10.563 dma/hisilicon: not in enabled drivers build config 00:01:10.563 dma/idxd: not in enabled drivers build config 00:01:10.563 dma/ioat: not in enabled drivers build config 00:01:10.563 dma/skeleton: not in enabled drivers build config 00:01:10.563 net/af_packet: not in enabled drivers build config 00:01:10.563 net/af_xdp: not in enabled drivers build config 00:01:10.563 net/ark: not in enabled drivers build config 00:01:10.563 net/atlantic: not in enabled drivers build config 00:01:10.563 net/avp: not in enabled drivers build config 00:01:10.563 net/axgbe: not in enabled drivers build config 00:01:10.563 net/bnx2x: not in enabled drivers build config 00:01:10.563 net/bnxt: not in enabled drivers build config 00:01:10.563 net/bonding: not in enabled drivers build config 00:01:10.563 net/cnxk: not in enabled drivers build config 00:01:10.563 net/cpfl: not in enabled drivers build config 00:01:10.563 net/cxgbe: not in enabled drivers build config 00:01:10.563 net/dpaa: not in enabled drivers build config 00:01:10.563 net/dpaa2: not in enabled drivers build config 00:01:10.563 net/e1000: not in enabled drivers build config 00:01:10.563 net/ena: not in enabled drivers build config 00:01:10.563 net/enetc: not in enabled drivers build config 00:01:10.563 net/enetfec: not in enabled drivers build 
config 00:01:10.563 net/enic: not in enabled drivers build config 00:01:10.563 net/failsafe: not in enabled drivers build config 00:01:10.563 net/fm10k: not in enabled drivers build config 00:01:10.563 net/gve: not in enabled drivers build config 00:01:10.563 net/hinic: not in enabled drivers build config 00:01:10.563 net/hns3: not in enabled drivers build config 00:01:10.563 net/i40e: not in enabled drivers build config 00:01:10.563 net/iavf: not in enabled drivers build config 00:01:10.563 net/ice: not in enabled drivers build config 00:01:10.563 net/idpf: not in enabled drivers build config 00:01:10.563 net/igc: not in enabled drivers build config 00:01:10.563 net/ionic: not in enabled drivers build config 00:01:10.563 net/ipn3ke: not in enabled drivers build config 00:01:10.563 net/ixgbe: not in enabled drivers build config 00:01:10.563 net/mana: not in enabled drivers build config 00:01:10.563 net/memif: not in enabled drivers build config 00:01:10.563 net/mlx4: not in enabled drivers build config 00:01:10.563 net/mlx5: not in enabled drivers build config 00:01:10.563 net/mvneta: not in enabled drivers build config 00:01:10.563 net/mvpp2: not in enabled drivers build config 00:01:10.563 net/netvsc: not in enabled drivers build config 00:01:10.563 net/nfb: not in enabled drivers build config 00:01:10.563 net/nfp: not in enabled drivers build config 00:01:10.563 net/ngbe: not in enabled drivers build config 00:01:10.563 net/null: not in enabled drivers build config 00:01:10.563 net/octeontx: not in enabled drivers build config 00:01:10.563 net/octeon_ep: not in enabled drivers build config 00:01:10.563 net/pcap: not in enabled drivers build config 00:01:10.563 net/pfe: not in enabled drivers build config 00:01:10.563 net/qede: not in enabled drivers build config 00:01:10.563 net/ring: not in enabled drivers build config 00:01:10.563 net/sfc: not in enabled drivers build config 00:01:10.563 net/softnic: not in enabled drivers build config 00:01:10.563 net/tap: 
not in enabled drivers build config 00:01:10.563 net/thunderx: not in enabled drivers build config 00:01:10.563 net/txgbe: not in enabled drivers build config 00:01:10.563 net/vdev_netvsc: not in enabled drivers build config 00:01:10.563 net/vhost: not in enabled drivers build config 00:01:10.563 net/virtio: not in enabled drivers build config 00:01:10.563 net/vmxnet3: not in enabled drivers build config 00:01:10.563 raw/*: missing internal dependency, "rawdev" 00:01:10.563 crypto/armv8: not in enabled drivers build config 00:01:10.563 crypto/bcmfs: not in enabled drivers build config 00:01:10.563 crypto/caam_jr: not in enabled drivers build config 00:01:10.563 crypto/ccp: not in enabled drivers build config 00:01:10.563 crypto/cnxk: not in enabled drivers build config 00:01:10.564 crypto/dpaa_sec: not in enabled drivers build config 00:01:10.564 crypto/dpaa2_sec: not in enabled drivers build config 00:01:10.564 crypto/ipsec_mb: not in enabled drivers build config 00:01:10.564 crypto/mlx5: not in enabled drivers build config 00:01:10.564 crypto/mvsam: not in enabled drivers build config 00:01:10.564 crypto/nitrox: not in enabled drivers build config 00:01:10.564 crypto/null: not in enabled drivers build config 00:01:10.564 crypto/octeontx: not in enabled drivers build config 00:01:10.564 crypto/openssl: not in enabled drivers build config 00:01:10.564 crypto/scheduler: not in enabled drivers build config 00:01:10.564 crypto/uadk: not in enabled drivers build config 00:01:10.564 crypto/virtio: not in enabled drivers build config 00:01:10.564 compress/isal: not in enabled drivers build config 00:01:10.564 compress/mlx5: not in enabled drivers build config 00:01:10.564 compress/nitrox: not in enabled drivers build config 00:01:10.564 compress/octeontx: not in enabled drivers build config 00:01:10.564 compress/zlib: not in enabled drivers build config 00:01:10.564 regex/*: missing internal dependency, "regexdev" 00:01:10.564 ml/*: missing internal dependency, "mldev" 
00:01:10.564 vdpa/ifc: not in enabled drivers build config 00:01:10.564 vdpa/mlx5: not in enabled drivers build config 00:01:10.564 vdpa/nfp: not in enabled drivers build config 00:01:10.564 vdpa/sfc: not in enabled drivers build config 00:01:10.564 event/*: missing internal dependency, "eventdev" 00:01:10.564 baseband/*: missing internal dependency, "bbdev" 00:01:10.564 gpu/*: missing internal dependency, "gpudev" 00:01:10.564 00:01:10.564 00:01:10.564 Build targets in project: 85 00:01:10.564 00:01:10.564 DPDK 24.03.0 00:01:10.564 00:01:10.564 User defined options 00:01:10.564 buildtype : debug 00:01:10.564 default_library : shared 00:01:10.564 libdir : lib 00:01:10.564 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:10.564 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:10.564 c_link_args : 00:01:10.564 cpu_instruction_set: native 00:01:10.564 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:10.564 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:10.564 enable_docs : false 00:01:10.564 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:10.564 enable_kmods : false 00:01:10.564 max_lcores : 128 00:01:10.564 tests : false 00:01:10.564 00:01:10.564 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:11.142 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:11.142 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:11.142 [2/268] Compiling C object 
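Editor's aside: the "User defined options" summary that meson prints above can be read back as a configure invocation. A hedged reconstruction follows — the option values are taken verbatim from the summary, but the command shape and paths are illustrative, and the long `disable_apps`/`disable_libs` lists are elided rather than guessed at:

```shell
# Sketch only: an approximately equivalent standalone configure step,
# reconstructed from the "User defined options" block in the log.
meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false
  # -Ddisable_apps=... and -Ddisable_libs=... omitted; see the summary above.
ninja -C build-tmp
```

This is why the subsequent "Content Skipped" sections list so many apps, libs, and drivers: everything not named in `enable_drivers` (or explicitly disabled via `disable_apps`/`disable_libs`) is excluded from the 85 build targets the log reports.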
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:11.142 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:11.142 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:11.142 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:11.142 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:11.142 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:11.142 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:11.404 [9/268] Linking static target lib/librte_kvargs.a 00:01:11.404 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:11.404 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:11.404 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:11.404 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:11.404 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:11.404 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:11.404 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:11.404 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:11.404 [18/268] Linking static target lib/librte_log.a 00:01:11.404 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:11.404 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:11.404 [21/268] Linking static target lib/librte_pci.a 00:01:11.404 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:11.404 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:11.668 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:11.668 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:11.668 
[26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:11.668 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:11.668 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:11.668 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:11.668 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:11.668 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:11.668 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:11.668 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:11.668 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:11.668 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:11.668 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:11.668 [37/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:11.668 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:11.668 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:11.668 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:11.668 [41/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:11.668 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:11.668 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:11.668 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:11.668 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:11.668 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:11.668 [47/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:11.668 [48/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:11.668 [49/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:11.668 [50/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:11.668 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:11.668 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:11.668 [53/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:11.668 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:11.668 [55/268] Linking static target lib/librte_meter.a 00:01:11.668 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:11.668 [57/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:11.668 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:11.668 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:11.929 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:11.929 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:11.929 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:11.929 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:11.929 [64/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:11.929 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:11.929 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:11.929 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:11.929 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:11.929 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:11.929 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:11.929 [71/268] 
Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.929 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:11.929 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:11.929 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:11.929 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:11.929 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:11.929 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:11.929 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:11.929 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:11.929 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:11.929 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:11.929 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:11.929 [83/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:11.929 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:11.929 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:11.929 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:11.929 [87/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:11.929 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:11.929 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:11.929 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:11.929 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:11.929 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:11.929 
[93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:11.929 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:11.929 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:11.929 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:11.929 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:11.929 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:11.929 [99/268] Linking static target lib/librte_ring.a 00:01:11.929 [100/268] Linking static target lib/librte_telemetry.a 00:01:11.929 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:11.929 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:11.929 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:11.929 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:11.929 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.929 [106/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:11.929 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:11.929 [108/268] Linking static target lib/librte_net.a 00:01:11.929 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:11.929 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:11.929 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:11.930 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.930 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:11.930 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:11.930 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:11.930 [116/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:11.930 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:11.930 [118/268] Linking static target lib/librte_rcu.a 00:01:11.930 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:11.930 [120/268] Linking static target lib/librte_mempool.a 00:01:11.930 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:11.930 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:11.930 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:11.930 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:11.930 [125/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:11.930 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:11.930 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:11.930 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:11.930 [129/268] Linking static target lib/librte_eal.a 00:01:11.930 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:12.188 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:12.188 [132/268] Linking static target lib/librte_cmdline.a 00:01:12.188 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.188 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:12.188 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.188 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:12.188 [137/268] Linking target lib/librte_log.so.24.1 00:01:12.188 [138/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:12.188 [139/268] Generating lib/net.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:12.188 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:12.188 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:12.188 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.188 [143/268] Linking static target lib/librte_mbuf.a 00:01:12.188 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.188 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:12.188 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:12.188 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:12.188 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:12.188 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:12.188 [150/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:12.188 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:12.188 [152/268] Linking static target lib/librte_timer.a 00:01:12.188 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:12.188 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:12.188 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:12.188 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:12.188 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:12.188 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:12.188 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.188 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:12.188 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:12.188 [162/268] 
Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:12.188 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:12.188 [164/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.188 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.446 [166/268] Linking target lib/librte_kvargs.so.24.1 00:01:12.446 [167/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.446 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:12.446 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:12.446 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:12.446 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:12.446 [172/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.446 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:12.446 [174/268] Linking target lib/librte_telemetry.so.24.1 00:01:12.446 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:12.446 [176/268] Linking static target lib/librte_security.a 00:01:12.446 [177/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:12.447 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:12.447 [179/268] Linking static target lib/librte_dmadev.a 00:01:12.447 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:12.447 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:12.447 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:12.447 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:12.447 [184/268] Linking static target lib/librte_compressdev.a 
00:01:12.447 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.447 [186/268] Linking static target lib/librte_power.a 00:01:12.447 [187/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:12.447 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:12.447 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.447 [190/268] Linking static target lib/librte_reorder.a 00:01:12.447 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.447 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:12.447 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:12.447 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:12.447 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:12.447 [196/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:12.705 [197/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.705 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.705 [199/268] Linking static target drivers/librte_bus_pci.a 00:01:12.705 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.705 [201/268] Linking static target lib/librte_hash.a 00:01:12.705 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.705 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.705 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.705 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:12.705 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:12.705 [207/268] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:12.705 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.705 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:12.705 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.705 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.705 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:12.963 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:12.963 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.963 [215/268] Linking static target lib/librte_cryptodev.a 00:01:12.963 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.963 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:12.963 [218/268] Linking static target lib/librte_ethdev.a 00:01:12.963 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.963 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.963 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.220 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.220 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.220 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.220 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.220 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:13.478 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.411 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:14.412 [229/268] Linking static target lib/librte_vhost.a 00:01:14.670 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.569 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.753 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.319 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.577 [234/268] Linking target lib/librte_eal.so.24.1 00:01:21.577 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:21.577 [236/268] Linking target lib/librte_ring.so.24.1 00:01:21.577 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:21.577 [238/268] Linking target lib/librte_pci.so.24.1 00:01:21.577 [239/268] Linking target lib/librte_meter.so.24.1 00:01:21.577 [240/268] Linking target lib/librte_timer.so.24.1 00:01:21.577 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:21.834 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:21.835 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:21.835 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:21.835 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:21.835 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:21.835 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:21.835 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:21.835 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:21.835 [250/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:21.835 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:22.093 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:22.093 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:22.093 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:22.093 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:22.093 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:22.093 [257/268] Linking target lib/librte_net.so.24.1 00:01:22.093 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:22.351 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:22.351 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:22.351 [261/268] Linking target lib/librte_security.so.24.1 00:01:22.351 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:22.351 [263/268] Linking target lib/librte_hash.so.24.1 00:01:22.351 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:22.351 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:22.351 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:22.609 [267/268] Linking target lib/librte_power.so.24.1 00:01:22.609 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:22.609 INFO: autodetecting backend as ninja 00:01:22.609 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:34.800 CC lib/log/log_deprecated.o 00:01:34.800 CC lib/log/log.o 00:01:34.800 CC lib/log/log_flags.o 00:01:34.800 CC lib/ut_mock/mock.o 00:01:34.800 CC lib/ut/ut.o 00:01:34.800 LIB libspdk_ut_mock.a 00:01:34.800 LIB libspdk_log.a 00:01:34.800 LIB libspdk_ut.a 00:01:34.800 SO libspdk_ut_mock.so.6.0 00:01:34.800 SO libspdk_log.so.7.1 
00:01:34.800 SO libspdk_ut.so.2.0 00:01:34.800 SYMLINK libspdk_ut_mock.so 00:01:34.800 SYMLINK libspdk_ut.so 00:01:34.800 SYMLINK libspdk_log.so 00:01:34.800 CC lib/dma/dma.o 00:01:34.800 CXX lib/trace_parser/trace.o 00:01:34.800 CC lib/ioat/ioat.o 00:01:34.800 CC lib/util/base64.o 00:01:34.800 CC lib/util/bit_array.o 00:01:34.800 CC lib/util/cpuset.o 00:01:34.800 CC lib/util/crc16.o 00:01:34.800 CC lib/util/crc32.o 00:01:34.800 CC lib/util/crc32c.o 00:01:34.800 CC lib/util/crc32_ieee.o 00:01:34.800 CC lib/util/crc64.o 00:01:34.800 CC lib/util/dif.o 00:01:34.800 CC lib/util/fd.o 00:01:34.800 CC lib/util/fd_group.o 00:01:34.800 CC lib/util/file.o 00:01:34.800 CC lib/util/hexlify.o 00:01:34.800 CC lib/util/iov.o 00:01:34.800 CC lib/util/math.o 00:01:34.800 CC lib/util/net.o 00:01:34.800 CC lib/util/pipe.o 00:01:34.800 CC lib/util/strerror_tls.o 00:01:34.800 CC lib/util/string.o 00:01:34.800 CC lib/util/uuid.o 00:01:34.800 CC lib/util/xor.o 00:01:34.800 CC lib/util/zipf.o 00:01:34.800 CC lib/util/md5.o 00:01:34.800 CC lib/vfio_user/host/vfio_user.o 00:01:34.800 CC lib/vfio_user/host/vfio_user_pci.o 00:01:34.800 LIB libspdk_dma.a 00:01:34.800 SO libspdk_dma.so.5.0 00:01:34.800 LIB libspdk_ioat.a 00:01:34.801 SYMLINK libspdk_dma.so 00:01:34.801 SO libspdk_ioat.so.7.0 00:01:34.801 SYMLINK libspdk_ioat.so 00:01:34.801 LIB libspdk_vfio_user.a 00:01:34.801 SO libspdk_vfio_user.so.5.0 00:01:34.801 LIB libspdk_util.a 00:01:34.801 SYMLINK libspdk_vfio_user.so 00:01:34.801 SO libspdk_util.so.10.1 00:01:34.801 SYMLINK libspdk_util.so 00:01:34.801 LIB libspdk_trace_parser.a 00:01:34.801 SO libspdk_trace_parser.so.6.0 00:01:34.801 SYMLINK libspdk_trace_parser.so 00:01:34.801 CC lib/rdma_utils/rdma_utils.o 00:01:34.801 CC lib/vmd/vmd.o 00:01:34.801 CC lib/vmd/led.o 00:01:35.059 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:35.059 CC lib/json/json_parse.o 00:01:35.059 CC lib/rdma_provider/common.o 00:01:35.059 CC lib/json/json_util.o 00:01:35.059 CC lib/json/json_write.o 
00:01:35.059 CC lib/conf/conf.o 00:01:35.059 CC lib/idxd/idxd.o 00:01:35.059 CC lib/idxd/idxd_kernel.o 00:01:35.059 CC lib/idxd/idxd_user.o 00:01:35.059 CC lib/env_dpdk/env.o 00:01:35.059 CC lib/env_dpdk/memory.o 00:01:35.059 CC lib/env_dpdk/pci.o 00:01:35.059 CC lib/env_dpdk/init.o 00:01:35.059 CC lib/env_dpdk/threads.o 00:01:35.059 CC lib/env_dpdk/pci_ioat.o 00:01:35.059 CC lib/env_dpdk/pci_virtio.o 00:01:35.059 CC lib/env_dpdk/pci_vmd.o 00:01:35.059 CC lib/env_dpdk/pci_idxd.o 00:01:35.059 CC lib/env_dpdk/pci_event.o 00:01:35.059 CC lib/env_dpdk/sigbus_handler.o 00:01:35.059 CC lib/env_dpdk/pci_dpdk.o 00:01:35.059 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:35.059 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:35.059 LIB libspdk_rdma_provider.a 00:01:35.059 LIB libspdk_conf.a 00:01:35.317 LIB libspdk_rdma_utils.a 00:01:35.317 SO libspdk_conf.so.6.0 00:01:35.317 SO libspdk_rdma_provider.so.6.0 00:01:35.317 SO libspdk_rdma_utils.so.1.0 00:01:35.317 LIB libspdk_json.a 00:01:35.317 SYMLINK libspdk_conf.so 00:01:35.317 SYMLINK libspdk_rdma_provider.so 00:01:35.317 SO libspdk_json.so.6.0 00:01:35.317 SYMLINK libspdk_rdma_utils.so 00:01:35.317 SYMLINK libspdk_json.so 00:01:35.317 LIB libspdk_idxd.a 00:01:35.575 LIB libspdk_vmd.a 00:01:35.575 SO libspdk_idxd.so.12.1 00:01:35.575 SO libspdk_vmd.so.6.0 00:01:35.575 SYMLINK libspdk_idxd.so 00:01:35.575 SYMLINK libspdk_vmd.so 00:01:35.575 CC lib/jsonrpc/jsonrpc_server.o 00:01:35.575 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:35.575 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:35.575 CC lib/jsonrpc/jsonrpc_client.o 00:01:35.833 LIB libspdk_jsonrpc.a 00:01:35.833 SO libspdk_jsonrpc.so.6.0 00:01:35.833 SYMLINK libspdk_jsonrpc.so 00:01:36.091 LIB libspdk_env_dpdk.a 00:01:36.091 SO libspdk_env_dpdk.so.15.1 00:01:36.091 SYMLINK libspdk_env_dpdk.so 00:01:36.350 CC lib/rpc/rpc.o 00:01:36.350 LIB libspdk_rpc.a 00:01:36.350 SO libspdk_rpc.so.6.0 00:01:36.608 SYMLINK libspdk_rpc.so 00:01:36.867 CC lib/notify/notify.o 00:01:36.867 CC 
lib/notify/notify_rpc.o 00:01:36.867 CC lib/trace/trace.o 00:01:36.867 CC lib/trace/trace_flags.o 00:01:36.867 CC lib/trace/trace_rpc.o 00:01:36.867 CC lib/keyring/keyring.o 00:01:36.867 CC lib/keyring/keyring_rpc.o 00:01:36.867 LIB libspdk_notify.a 00:01:36.867 SO libspdk_notify.so.6.0 00:01:37.125 LIB libspdk_trace.a 00:01:37.125 LIB libspdk_keyring.a 00:01:37.125 SYMLINK libspdk_notify.so 00:01:37.125 SO libspdk_trace.so.11.0 00:01:37.125 SO libspdk_keyring.so.2.0 00:01:37.125 SYMLINK libspdk_trace.so 00:01:37.125 SYMLINK libspdk_keyring.so 00:01:37.384 CC lib/sock/sock.o 00:01:37.384 CC lib/sock/sock_rpc.o 00:01:37.384 CC lib/thread/thread.o 00:01:37.384 CC lib/thread/iobuf.o 00:01:37.642 LIB libspdk_sock.a 00:01:37.642 SO libspdk_sock.so.10.0 00:01:37.900 SYMLINK libspdk_sock.so 00:01:38.159 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:38.159 CC lib/nvme/nvme_ctrlr.o 00:01:38.159 CC lib/nvme/nvme_fabric.o 00:01:38.159 CC lib/nvme/nvme_ns_cmd.o 00:01:38.159 CC lib/nvme/nvme_ns.o 00:01:38.159 CC lib/nvme/nvme_qpair.o 00:01:38.159 CC lib/nvme/nvme_pcie_common.o 00:01:38.159 CC lib/nvme/nvme_pcie.o 00:01:38.159 CC lib/nvme/nvme.o 00:01:38.159 CC lib/nvme/nvme_quirks.o 00:01:38.159 CC lib/nvme/nvme_transport.o 00:01:38.159 CC lib/nvme/nvme_discovery.o 00:01:38.159 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:38.159 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:38.159 CC lib/nvme/nvme_tcp.o 00:01:38.159 CC lib/nvme/nvme_opal.o 00:01:38.159 CC lib/nvme/nvme_io_msg.o 00:01:38.159 CC lib/nvme/nvme_poll_group.o 00:01:38.159 CC lib/nvme/nvme_stubs.o 00:01:38.159 CC lib/nvme/nvme_zns.o 00:01:38.159 CC lib/nvme/nvme_auth.o 00:01:38.159 CC lib/nvme/nvme_cuse.o 00:01:38.159 CC lib/nvme/nvme_rdma.o 00:01:38.159 CC lib/nvme/nvme_vfio_user.o 00:01:38.418 LIB libspdk_thread.a 00:01:38.418 SO libspdk_thread.so.11.0 00:01:38.677 SYMLINK libspdk_thread.so 00:01:38.935 CC lib/accel/accel.o 00:01:38.935 CC lib/accel/accel_sw.o 00:01:38.935 CC lib/accel/accel_rpc.o 00:01:38.935 CC lib/blob/blobstore.o 
00:01:38.935 CC lib/blob/request.o 00:01:38.935 CC lib/blob/zeroes.o 00:01:38.935 CC lib/blob/blob_bs_dev.o 00:01:38.935 CC lib/vfu_tgt/tgt_endpoint.o 00:01:38.935 CC lib/vfu_tgt/tgt_rpc.o 00:01:38.935 CC lib/fsdev/fsdev_io.o 00:01:38.935 CC lib/fsdev/fsdev.o 00:01:38.935 CC lib/virtio/virtio.o 00:01:38.935 CC lib/fsdev/fsdev_rpc.o 00:01:38.935 CC lib/virtio/virtio_vhost_user.o 00:01:38.935 CC lib/virtio/virtio_pci.o 00:01:38.935 CC lib/virtio/virtio_vfio_user.o 00:01:38.935 CC lib/init/json_config.o 00:01:38.935 CC lib/init/subsystem.o 00:01:38.935 CC lib/init/subsystem_rpc.o 00:01:38.935 CC lib/init/rpc.o 00:01:39.193 LIB libspdk_init.a 00:01:39.193 SO libspdk_init.so.6.0 00:01:39.193 LIB libspdk_virtio.a 00:01:39.193 LIB libspdk_vfu_tgt.a 00:01:39.193 SO libspdk_virtio.so.7.0 00:01:39.193 SO libspdk_vfu_tgt.so.3.0 00:01:39.193 SYMLINK libspdk_init.so 00:01:39.193 SYMLINK libspdk_virtio.so 00:01:39.193 SYMLINK libspdk_vfu_tgt.so 00:01:39.452 LIB libspdk_fsdev.a 00:01:39.452 SO libspdk_fsdev.so.2.0 00:01:39.452 CC lib/event/app.o 00:01:39.452 CC lib/event/reactor.o 00:01:39.452 CC lib/event/log_rpc.o 00:01:39.452 CC lib/event/app_rpc.o 00:01:39.452 CC lib/event/scheduler_static.o 00:01:39.452 SYMLINK libspdk_fsdev.so 00:01:39.711 LIB libspdk_accel.a 00:01:39.711 SO libspdk_accel.so.16.0 00:01:39.711 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:39.711 LIB libspdk_nvme.a 00:01:39.711 SYMLINK libspdk_accel.so 00:01:39.970 LIB libspdk_event.a 00:01:39.970 SO libspdk_event.so.14.0 00:01:39.970 SO libspdk_nvme.so.15.0 00:01:39.970 SYMLINK libspdk_event.so 00:01:39.970 CC lib/bdev/bdev.o 00:01:39.970 CC lib/bdev/bdev_rpc.o 00:01:39.970 CC lib/bdev/bdev_zone.o 00:01:39.970 CC lib/bdev/part.o 00:01:39.970 CC lib/bdev/scsi_nvme.o 00:01:39.970 SYMLINK libspdk_nvme.so 00:01:40.228 LIB libspdk_fuse_dispatcher.a 00:01:40.228 SO libspdk_fuse_dispatcher.so.1.0 00:01:40.487 SYMLINK libspdk_fuse_dispatcher.so 00:01:41.054 LIB libspdk_blob.a 00:01:41.054 SO libspdk_blob.so.11.0 
00:01:41.054 SYMLINK libspdk_blob.so 00:01:41.313 CC lib/blobfs/blobfs.o 00:01:41.313 CC lib/blobfs/tree.o 00:01:41.571 CC lib/lvol/lvol.o 00:01:41.829 LIB libspdk_bdev.a 00:01:41.829 SO libspdk_bdev.so.17.0 00:01:42.087 SYMLINK libspdk_bdev.so 00:01:42.087 LIB libspdk_blobfs.a 00:01:42.087 SO libspdk_blobfs.so.10.0 00:01:42.087 LIB libspdk_lvol.a 00:01:42.087 SYMLINK libspdk_blobfs.so 00:01:42.087 SO libspdk_lvol.so.10.0 00:01:42.087 SYMLINK libspdk_lvol.so 00:01:42.347 CC lib/nbd/nbd.o 00:01:42.347 CC lib/nbd/nbd_rpc.o 00:01:42.347 CC lib/ftl/ftl_core.o 00:01:42.347 CC lib/ftl/ftl_layout.o 00:01:42.347 CC lib/ftl/ftl_init.o 00:01:42.347 CC lib/ftl/ftl_debug.o 00:01:42.347 CC lib/ftl/ftl_sb.o 00:01:42.347 CC lib/ftl/ftl_io.o 00:01:42.347 CC lib/ftl/ftl_l2p.o 00:01:42.347 CC lib/ftl/ftl_l2p_flat.o 00:01:42.347 CC lib/ftl/ftl_nv_cache.o 00:01:42.347 CC lib/ftl/ftl_band.o 00:01:42.347 CC lib/ftl/ftl_band_ops.o 00:01:42.347 CC lib/ftl/ftl_writer.o 00:01:42.347 CC lib/nvmf/ctrlr.o 00:01:42.347 CC lib/ftl/ftl_rq.o 00:01:42.347 CC lib/nvmf/ctrlr_discovery.o 00:01:42.347 CC lib/ftl/ftl_reloc.o 00:01:42.347 CC lib/ftl/ftl_l2p_cache.o 00:01:42.347 CC lib/nvmf/ctrlr_bdev.o 00:01:42.347 CC lib/nvmf/nvmf_rpc.o 00:01:42.347 CC lib/nvmf/subsystem.o 00:01:42.347 CC lib/ftl/ftl_p2l.o 00:01:42.347 CC lib/ftl/ftl_p2l_log.o 00:01:42.347 CC lib/nvmf/nvmf.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt.o 00:01:42.347 CC lib/nvmf/transport.o 00:01:42.347 CC lib/nvmf/tcp.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:42.347 CC lib/scsi/dev.o 00:01:42.347 CC lib/ublk/ublk.o 00:01:42.347 CC lib/nvmf/vfio_user.o 00:01:42.347 CC lib/ublk/ublk_rpc.o 00:01:42.347 CC lib/scsi/port.o 00:01:42.347 CC lib/scsi/lun.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:42.347 CC lib/nvmf/stubs.o 00:01:42.347 CC lib/scsi/scsi.o 00:01:42.347 CC lib/scsi/scsi_bdev.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:42.347 CC lib/nvmf/mdns_server.o 
00:01:42.347 CC lib/nvmf/auth.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:42.347 CC lib/nvmf/rdma.o 00:01:42.347 CC lib/scsi/scsi_pr.o 00:01:42.347 CC lib/scsi/scsi_rpc.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:42.347 CC lib/scsi/task.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:42.347 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:42.347 CC lib/ftl/utils/ftl_conf.o 00:01:42.347 CC lib/ftl/utils/ftl_md.o 00:01:42.347 CC lib/ftl/utils/ftl_bitmap.o 00:01:42.347 CC lib/ftl/utils/ftl_mempool.o 00:01:42.347 CC lib/ftl/utils/ftl_property.o 00:01:42.347 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:42.347 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:42.347 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:42.347 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:42.347 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:42.347 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:42.347 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:42.347 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:42.347 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:42.347 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:42.347 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:42.347 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:42.347 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:42.347 CC lib/ftl/ftl_trace.o 00:01:42.347 CC lib/ftl/base/ftl_base_dev.o 00:01:42.347 CC lib/ftl/base/ftl_base_bdev.o 00:01:42.912 LIB libspdk_nbd.a 00:01:42.912 SO libspdk_nbd.so.7.0 00:01:42.912 SYMLINK libspdk_nbd.so 00:01:42.912 LIB libspdk_scsi.a 00:01:42.912 SO libspdk_scsi.so.9.0 00:01:43.169 SYMLINK libspdk_scsi.so 00:01:43.169 LIB libspdk_ublk.a 00:01:43.169 SO libspdk_ublk.so.3.0 00:01:43.169 SYMLINK libspdk_ublk.so 00:01:43.169 LIB libspdk_ftl.a 00:01:43.425 CC lib/iscsi/conn.o 00:01:43.425 CC lib/iscsi/init_grp.o 00:01:43.425 CC lib/vhost/vhost.o 00:01:43.425 CC lib/iscsi/iscsi.o 
00:01:43.425 CC lib/vhost/vhost_rpc.o 00:01:43.425 CC lib/iscsi/param.o 00:01:43.425 CC lib/vhost/vhost_scsi.o 00:01:43.425 CC lib/vhost/vhost_blk.o 00:01:43.425 CC lib/iscsi/portal_grp.o 00:01:43.425 CC lib/vhost/rte_vhost_user.o 00:01:43.425 CC lib/iscsi/tgt_node.o 00:01:43.425 CC lib/iscsi/iscsi_subsystem.o 00:01:43.425 CC lib/iscsi/iscsi_rpc.o 00:01:43.425 CC lib/iscsi/task.o 00:01:43.425 SO libspdk_ftl.so.9.0 00:01:43.682 SYMLINK libspdk_ftl.so 00:01:44.247 LIB libspdk_nvmf.a 00:01:44.247 LIB libspdk_vhost.a 00:01:44.247 SO libspdk_nvmf.so.20.0 00:01:44.247 SO libspdk_vhost.so.8.0 00:01:44.247 SYMLINK libspdk_vhost.so 00:01:44.247 LIB libspdk_iscsi.a 00:01:44.247 SYMLINK libspdk_nvmf.so 00:01:44.247 SO libspdk_iscsi.so.8.0 00:01:44.504 SYMLINK libspdk_iscsi.so 00:01:45.070 CC module/env_dpdk/env_dpdk_rpc.o 00:01:45.070 CC module/vfu_device/vfu_virtio.o 00:01:45.070 CC module/vfu_device/vfu_virtio_blk.o 00:01:45.070 CC module/vfu_device/vfu_virtio_scsi.o 00:01:45.070 CC module/vfu_device/vfu_virtio_rpc.o 00:01:45.070 CC module/vfu_device/vfu_virtio_fs.o 00:01:45.070 CC module/sock/posix/posix.o 00:01:45.070 CC module/keyring/file/keyring.o 00:01:45.070 CC module/keyring/file/keyring_rpc.o 00:01:45.070 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:45.070 CC module/keyring/linux/keyring.o 00:01:45.070 CC module/keyring/linux/keyring_rpc.o 00:01:45.070 CC module/fsdev/aio/fsdev_aio.o 00:01:45.070 CC module/accel/ioat/accel_ioat_rpc.o 00:01:45.070 CC module/accel/ioat/accel_ioat.o 00:01:45.070 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:45.070 CC module/fsdev/aio/linux_aio_mgr.o 00:01:45.070 CC module/accel/error/accel_error.o 00:01:45.070 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:45.070 CC module/accel/error/accel_error_rpc.o 00:01:45.070 CC module/accel/dsa/accel_dsa.o 00:01:45.070 CC module/scheduler/gscheduler/gscheduler.o 00:01:45.070 CC module/accel/dsa/accel_dsa_rpc.o 00:01:45.070 CC module/accel/iaa/accel_iaa.o 00:01:45.070 CC 
module/accel/iaa/accel_iaa_rpc.o 00:01:45.070 CC module/blob/bdev/blob_bdev.o 00:01:45.070 LIB libspdk_env_dpdk_rpc.a 00:01:45.070 SO libspdk_env_dpdk_rpc.so.6.0 00:01:45.328 SYMLINK libspdk_env_dpdk_rpc.so 00:01:45.328 LIB libspdk_keyring_file.a 00:01:45.328 LIB libspdk_scheduler_dpdk_governor.a 00:01:45.328 LIB libspdk_scheduler_gscheduler.a 00:01:45.328 SO libspdk_keyring_file.so.2.0 00:01:45.328 LIB libspdk_keyring_linux.a 00:01:45.328 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:45.328 SO libspdk_scheduler_gscheduler.so.4.0 00:01:45.328 LIB libspdk_accel_error.a 00:01:45.328 LIB libspdk_scheduler_dynamic.a 00:01:45.328 LIB libspdk_accel_ioat.a 00:01:45.328 SO libspdk_keyring_linux.so.1.0 00:01:45.328 SYMLINK libspdk_keyring_file.so 00:01:45.328 SO libspdk_scheduler_dynamic.so.4.0 00:01:45.328 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:45.328 LIB libspdk_accel_iaa.a 00:01:45.328 SO libspdk_accel_error.so.2.0 00:01:45.328 SO libspdk_accel_ioat.so.6.0 00:01:45.328 SYMLINK libspdk_scheduler_gscheduler.so 00:01:45.328 SO libspdk_accel_iaa.so.3.0 00:01:45.328 SYMLINK libspdk_keyring_linux.so 00:01:45.328 LIB libspdk_accel_dsa.a 00:01:45.328 LIB libspdk_blob_bdev.a 00:01:45.328 SYMLINK libspdk_scheduler_dynamic.so 00:01:45.328 SO libspdk_accel_dsa.so.5.0 00:01:45.328 SYMLINK libspdk_accel_error.so 00:01:45.328 SYMLINK libspdk_accel_ioat.so 00:01:45.328 SO libspdk_blob_bdev.so.11.0 00:01:45.328 SYMLINK libspdk_accel_iaa.so 00:01:45.328 SYMLINK libspdk_blob_bdev.so 00:01:45.328 SYMLINK libspdk_accel_dsa.so 00:01:45.587 LIB libspdk_vfu_device.a 00:01:45.587 SO libspdk_vfu_device.so.3.0 00:01:45.587 SYMLINK libspdk_vfu_device.so 00:01:45.587 LIB libspdk_fsdev_aio.a 00:01:45.587 SO libspdk_fsdev_aio.so.1.0 00:01:45.587 LIB libspdk_sock_posix.a 00:01:45.845 SYMLINK libspdk_fsdev_aio.so 00:01:45.846 SO libspdk_sock_posix.so.6.0 00:01:45.846 SYMLINK libspdk_sock_posix.so 00:01:45.846 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:45.846 CC 
module/bdev/delay/vbdev_delay.o 00:01:45.846 CC module/bdev/malloc/bdev_malloc.o 00:01:45.846 CC module/bdev/null/bdev_null_rpc.o 00:01:45.846 CC module/bdev/null/bdev_null.o 00:01:45.846 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:45.846 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:45.846 CC module/bdev/iscsi/bdev_iscsi.o 00:01:45.846 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:45.846 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:45.846 CC module/bdev/lvol/vbdev_lvol.o 00:01:45.846 CC module/bdev/gpt/gpt.o 00:01:45.846 CC module/bdev/error/vbdev_error.o 00:01:45.846 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:45.846 CC module/bdev/gpt/vbdev_gpt.o 00:01:45.846 CC module/bdev/error/vbdev_error_rpc.o 00:01:45.846 CC module/bdev/aio/bdev_aio_rpc.o 00:01:45.846 CC module/bdev/passthru/vbdev_passthru.o 00:01:45.846 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:45.846 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:45.846 CC module/bdev/aio/bdev_aio.o 00:01:45.846 CC module/blobfs/bdev/blobfs_bdev.o 00:01:45.846 CC module/bdev/ftl/bdev_ftl.o 00:01:45.846 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:45.846 CC module/bdev/split/vbdev_split.o 00:01:45.846 CC module/bdev/raid/bdev_raid.o 00:01:45.846 CC module/bdev/raid/bdev_raid_rpc.o 00:01:45.846 CC module/bdev/split/vbdev_split_rpc.o 00:01:45.846 CC module/bdev/raid/raid0.o 00:01:45.846 CC module/bdev/raid/bdev_raid_sb.o 00:01:45.846 CC module/bdev/nvme/bdev_nvme.o 00:01:45.846 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:45.846 CC module/bdev/nvme/nvme_rpc.o 00:01:45.846 CC module/bdev/raid/raid1.o 00:01:45.846 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:45.846 CC module/bdev/nvme/bdev_mdns_client.o 00:01:45.846 CC module/bdev/raid/concat.o 00:01:45.846 CC module/bdev/nvme/vbdev_opal.o 00:01:45.846 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:45.846 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:45.846 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:45.846 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:46.103 
LIB libspdk_blobfs_bdev.a 00:01:46.103 SO libspdk_blobfs_bdev.so.6.0 00:01:46.103 LIB libspdk_bdev_error.a 00:01:46.103 LIB libspdk_bdev_split.a 00:01:46.103 LIB libspdk_bdev_null.a 00:01:46.103 SO libspdk_bdev_error.so.6.0 00:01:46.360 LIB libspdk_bdev_ftl.a 00:01:46.360 LIB libspdk_bdev_gpt.a 00:01:46.360 SO libspdk_bdev_split.so.6.0 00:01:46.360 SYMLINK libspdk_blobfs_bdev.so 00:01:46.360 SO libspdk_bdev_null.so.6.0 00:01:46.360 LIB libspdk_bdev_aio.a 00:01:46.360 SO libspdk_bdev_gpt.so.6.0 00:01:46.360 SO libspdk_bdev_ftl.so.6.0 00:01:46.360 LIB libspdk_bdev_malloc.a 00:01:46.360 LIB libspdk_bdev_delay.a 00:01:46.360 LIB libspdk_bdev_zone_block.a 00:01:46.360 LIB libspdk_bdev_passthru.a 00:01:46.360 LIB libspdk_bdev_iscsi.a 00:01:46.360 SYMLINK libspdk_bdev_error.so 00:01:46.360 SO libspdk_bdev_aio.so.6.0 00:01:46.360 SO libspdk_bdev_malloc.so.6.0 00:01:46.360 SO libspdk_bdev_delay.so.6.0 00:01:46.360 SYMLINK libspdk_bdev_split.so 00:01:46.360 SO libspdk_bdev_passthru.so.6.0 00:01:46.360 SYMLINK libspdk_bdev_gpt.so 00:01:46.360 SYMLINK libspdk_bdev_null.so 00:01:46.360 SO libspdk_bdev_zone_block.so.6.0 00:01:46.360 SO libspdk_bdev_iscsi.so.6.0 00:01:46.360 SYMLINK libspdk_bdev_ftl.so 00:01:46.360 SYMLINK libspdk_bdev_delay.so 00:01:46.360 SYMLINK libspdk_bdev_aio.so 00:01:46.360 SYMLINK libspdk_bdev_malloc.so 00:01:46.360 SYMLINK libspdk_bdev_zone_block.so 00:01:46.360 SYMLINK libspdk_bdev_iscsi.so 00:01:46.360 SYMLINK libspdk_bdev_passthru.so 00:01:46.360 LIB libspdk_bdev_lvol.a 00:01:46.360 SO libspdk_bdev_lvol.so.6.0 00:01:46.360 LIB libspdk_bdev_virtio.a 00:01:46.360 SO libspdk_bdev_virtio.so.6.0 00:01:46.617 SYMLINK libspdk_bdev_lvol.so 00:01:46.617 SYMLINK libspdk_bdev_virtio.so 00:01:46.874 LIB libspdk_bdev_raid.a 00:01:46.874 SO libspdk_bdev_raid.so.6.0 00:01:46.874 SYMLINK libspdk_bdev_raid.so 00:01:47.804 LIB libspdk_bdev_nvme.a 00:01:47.804 SO libspdk_bdev_nvme.so.7.1 00:01:47.804 SYMLINK libspdk_bdev_nvme.so 00:01:48.738 CC 
module/event/subsystems/keyring/keyring.o 00:01:48.738 CC module/event/subsystems/scheduler/scheduler.o 00:01:48.738 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:48.738 CC module/event/subsystems/vmd/vmd.o 00:01:48.738 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.738 CC module/event/subsystems/iobuf/iobuf.o 00:01:48.738 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:48.738 CC module/event/subsystems/sock/sock.o 00:01:48.738 CC module/event/subsystems/fsdev/fsdev.o 00:01:48.738 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:48.738 LIB libspdk_event_vfu_tgt.a 00:01:48.738 SO libspdk_event_vfu_tgt.so.3.0 00:01:48.738 LIB libspdk_event_fsdev.a 00:01:48.738 LIB libspdk_event_keyring.a 00:01:48.738 LIB libspdk_event_vhost_blk.a 00:01:48.738 LIB libspdk_event_sock.a 00:01:48.738 LIB libspdk_event_scheduler.a 00:01:48.738 LIB libspdk_event_vmd.a 00:01:48.738 LIB libspdk_event_iobuf.a 00:01:48.738 SO libspdk_event_sock.so.5.0 00:01:48.738 SO libspdk_event_fsdev.so.1.0 00:01:48.738 SO libspdk_event_keyring.so.1.0 00:01:48.738 SO libspdk_event_vhost_blk.so.3.0 00:01:48.738 SO libspdk_event_scheduler.so.4.0 00:01:48.738 SO libspdk_event_vmd.so.6.0 00:01:48.738 SYMLINK libspdk_event_vfu_tgt.so 00:01:48.738 SO libspdk_event_iobuf.so.3.0 00:01:48.738 SYMLINK libspdk_event_sock.so 00:01:48.738 SYMLINK libspdk_event_fsdev.so 00:01:48.738 SYMLINK libspdk_event_keyring.so 00:01:48.738 SYMLINK libspdk_event_vhost_blk.so 00:01:48.738 SYMLINK libspdk_event_scheduler.so 00:01:48.738 SYMLINK libspdk_event_vmd.so 00:01:48.738 SYMLINK libspdk_event_iobuf.so 00:01:48.996 CC module/event/subsystems/accel/accel.o 00:01:49.253 LIB libspdk_event_accel.a 00:01:49.253 SO libspdk_event_accel.so.6.0 00:01:49.253 SYMLINK libspdk_event_accel.so 00:01:49.510 CC module/event/subsystems/bdev/bdev.o 00:01:49.766 LIB libspdk_event_bdev.a 00:01:49.766 SO libspdk_event_bdev.so.6.0 00:01:49.766 SYMLINK libspdk_event_bdev.so 00:01:50.023 CC module/event/subsystems/nbd/nbd.o 00:01:50.023 
CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:50.023 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:50.280 CC module/event/subsystems/ublk/ublk.o 00:01:50.280 CC module/event/subsystems/scsi/scsi.o 00:01:50.280 LIB libspdk_event_nbd.a 00:01:50.280 LIB libspdk_event_ublk.a 00:01:50.280 SO libspdk_event_nbd.so.6.0 00:01:50.280 LIB libspdk_event_scsi.a 00:01:50.280 SO libspdk_event_ublk.so.3.0 00:01:50.280 LIB libspdk_event_nvmf.a 00:01:50.280 SO libspdk_event_scsi.so.6.0 00:01:50.280 SYMLINK libspdk_event_nbd.so 00:01:50.280 SO libspdk_event_nvmf.so.6.0 00:01:50.280 SYMLINK libspdk_event_ublk.so 00:01:50.537 SYMLINK libspdk_event_scsi.so 00:01:50.537 SYMLINK libspdk_event_nvmf.so 00:01:50.795 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:50.795 CC module/event/subsystems/iscsi/iscsi.o 00:01:50.795 LIB libspdk_event_vhost_scsi.a 00:01:50.795 LIB libspdk_event_iscsi.a 00:01:50.795 SO libspdk_event_vhost_scsi.so.3.0 00:01:50.795 SO libspdk_event_iscsi.so.6.0 00:01:51.051 SYMLINK libspdk_event_vhost_scsi.so 00:01:51.051 SYMLINK libspdk_event_iscsi.so 00:01:51.051 SO libspdk.so.6.0 00:01:51.051 SYMLINK libspdk.so 00:01:51.309 CXX app/trace/trace.o 00:01:51.309 CC app/spdk_nvme_perf/perf.o 00:01:51.309 CC app/spdk_top/spdk_top.o 00:01:51.309 CC app/spdk_nvme_identify/identify.o 00:01:51.309 TEST_HEADER include/spdk/accel.h 00:01:51.309 TEST_HEADER include/spdk/assert.h 00:01:51.309 TEST_HEADER include/spdk/accel_module.h 00:01:51.309 TEST_HEADER include/spdk/barrier.h 00:01:51.309 CC app/spdk_lspci/spdk_lspci.o 00:01:51.309 TEST_HEADER include/spdk/bdev.h 00:01:51.309 CC app/trace_record/trace_record.o 00:01:51.310 TEST_HEADER include/spdk/base64.h 00:01:51.310 TEST_HEADER include/spdk/bdev_zone.h 00:01:51.310 TEST_HEADER include/spdk/bdev_module.h 00:01:51.310 TEST_HEADER include/spdk/bit_array.h 00:01:51.310 TEST_HEADER include/spdk/bit_pool.h 00:01:51.310 TEST_HEADER include/spdk/blob_bdev.h 00:01:51.310 TEST_HEADER include/spdk/blobfs_bdev.h 
00:01:51.310 TEST_HEADER include/spdk/blobfs.h 00:01:51.310 TEST_HEADER include/spdk/conf.h 00:01:51.310 TEST_HEADER include/spdk/blob.h 00:01:51.310 TEST_HEADER include/spdk/config.h 00:01:51.310 TEST_HEADER include/spdk/crc16.h 00:01:51.310 TEST_HEADER include/spdk/crc32.h 00:01:51.310 TEST_HEADER include/spdk/crc64.h 00:01:51.310 TEST_HEADER include/spdk/cpuset.h 00:01:51.310 TEST_HEADER include/spdk/dif.h 00:01:51.310 TEST_HEADER include/spdk/endian.h 00:01:51.310 TEST_HEADER include/spdk/dma.h 00:01:51.310 CC test/rpc_client/rpc_client_test.o 00:01:51.310 TEST_HEADER include/spdk/env_dpdk.h 00:01:51.310 TEST_HEADER include/spdk/env.h 00:01:51.310 CC app/spdk_nvme_discover/discovery_aer.o 00:01:51.310 TEST_HEADER include/spdk/fd_group.h 00:01:51.310 TEST_HEADER include/spdk/event.h 00:01:51.310 TEST_HEADER include/spdk/fd.h 00:01:51.310 TEST_HEADER include/spdk/fsdev.h 00:01:51.310 TEST_HEADER include/spdk/file.h 00:01:51.310 TEST_HEADER include/spdk/ftl.h 00:01:51.310 TEST_HEADER include/spdk/fsdev_module.h 00:01:51.310 TEST_HEADER include/spdk/fuse_dispatcher.h 00:01:51.310 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:51.310 TEST_HEADER include/spdk/gpt_spec.h 00:01:51.310 TEST_HEADER include/spdk/hexlify.h 00:01:51.310 TEST_HEADER include/spdk/histogram_data.h 00:01:51.575 TEST_HEADER include/spdk/idxd_spec.h 00:01:51.575 TEST_HEADER include/spdk/idxd.h 00:01:51.575 TEST_HEADER include/spdk/init.h 00:01:51.575 TEST_HEADER include/spdk/ioat.h 00:01:51.575 TEST_HEADER include/spdk/ioat_spec.h 00:01:51.575 TEST_HEADER include/spdk/iscsi_spec.h 00:01:51.575 TEST_HEADER include/spdk/keyring.h 00:01:51.575 TEST_HEADER include/spdk/json.h 00:01:51.575 TEST_HEADER include/spdk/jsonrpc.h 00:01:51.575 TEST_HEADER include/spdk/likely.h 00:01:51.575 TEST_HEADER include/spdk/keyring_module.h 00:01:51.575 CC app/spdk_dd/spdk_dd.o 00:01:51.575 TEST_HEADER include/spdk/lvol.h 00:01:51.575 TEST_HEADER include/spdk/md5.h 00:01:51.575 TEST_HEADER include/spdk/log.h 
00:01:51.575 TEST_HEADER include/spdk/nbd.h 00:01:51.575 TEST_HEADER include/spdk/memory.h 00:01:51.575 TEST_HEADER include/spdk/mmio.h 00:01:51.575 TEST_HEADER include/spdk/net.h 00:01:51.575 TEST_HEADER include/spdk/nvme.h 00:01:51.575 TEST_HEADER include/spdk/notify.h 00:01:51.575 TEST_HEADER include/spdk/nvme_intel.h 00:01:51.575 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:51.575 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:51.575 TEST_HEADER include/spdk/nvme_spec.h 00:01:51.575 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:51.575 TEST_HEADER include/spdk/nvmf.h 00:01:51.575 TEST_HEADER include/spdk/nvme_zns.h 00:01:51.575 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:51.575 TEST_HEADER include/spdk/nvmf_spec.h 00:01:51.575 TEST_HEADER include/spdk/opal.h 00:01:51.575 TEST_HEADER include/spdk/nvmf_transport.h 00:01:51.575 TEST_HEADER include/spdk/opal_spec.h 00:01:51.575 TEST_HEADER include/spdk/pipe.h 00:01:51.575 TEST_HEADER include/spdk/pci_ids.h 00:01:51.576 TEST_HEADER include/spdk/reduce.h 00:01:51.576 TEST_HEADER include/spdk/rpc.h 00:01:51.576 TEST_HEADER include/spdk/queue.h 00:01:51.576 TEST_HEADER include/spdk/scheduler.h 00:01:51.576 TEST_HEADER include/spdk/scsi.h 00:01:51.576 CC app/nvmf_tgt/nvmf_main.o 00:01:51.576 TEST_HEADER include/spdk/scsi_spec.h 00:01:51.576 TEST_HEADER include/spdk/sock.h 00:01:51.576 TEST_HEADER include/spdk/string.h 00:01:51.576 TEST_HEADER include/spdk/stdinc.h 00:01:51.576 TEST_HEADER include/spdk/thread.h 00:01:51.576 TEST_HEADER include/spdk/trace.h 00:01:51.576 TEST_HEADER include/spdk/trace_parser.h 00:01:51.576 TEST_HEADER include/spdk/tree.h 00:01:51.576 TEST_HEADER include/spdk/util.h 00:01:51.576 TEST_HEADER include/spdk/uuid.h 00:01:51.576 TEST_HEADER include/spdk/ublk.h 00:01:51.576 TEST_HEADER include/spdk/version.h 00:01:51.576 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:51.576 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:51.576 TEST_HEADER include/spdk/vhost.h 00:01:51.576 TEST_HEADER 
include/spdk/xor.h 00:01:51.576 TEST_HEADER include/spdk/vmd.h 00:01:51.576 CC app/iscsi_tgt/iscsi_tgt.o 00:01:51.576 CXX test/cpp_headers/accel.o 00:01:51.576 TEST_HEADER include/spdk/zipf.h 00:01:51.576 CXX test/cpp_headers/accel_module.o 00:01:51.576 CXX test/cpp_headers/assert.o 00:01:51.576 CXX test/cpp_headers/base64.o 00:01:51.576 CXX test/cpp_headers/barrier.o 00:01:51.576 CC app/spdk_tgt/spdk_tgt.o 00:01:51.576 CXX test/cpp_headers/bdev.o 00:01:51.576 CXX test/cpp_headers/bdev_module.o 00:01:51.576 CXX test/cpp_headers/bdev_zone.o 00:01:51.576 CXX test/cpp_headers/blob_bdev.o 00:01:51.576 CXX test/cpp_headers/bit_array.o 00:01:51.576 CXX test/cpp_headers/blobfs_bdev.o 00:01:51.576 CXX test/cpp_headers/blob.o 00:01:51.576 CXX test/cpp_headers/bit_pool.o 00:01:51.576 CXX test/cpp_headers/config.o 00:01:51.576 CXX test/cpp_headers/conf.o 00:01:51.576 CXX test/cpp_headers/blobfs.o 00:01:51.576 CXX test/cpp_headers/cpuset.o 00:01:51.576 CXX test/cpp_headers/crc16.o 00:01:51.576 CXX test/cpp_headers/crc32.o 00:01:51.576 CXX test/cpp_headers/crc64.o 00:01:51.576 CXX test/cpp_headers/dma.o 00:01:51.576 CXX test/cpp_headers/dif.o 00:01:51.576 CXX test/cpp_headers/endian.o 00:01:51.576 CXX test/cpp_headers/env_dpdk.o 00:01:51.576 CXX test/cpp_headers/env.o 00:01:51.576 CXX test/cpp_headers/event.o 00:01:51.576 CXX test/cpp_headers/fd_group.o 00:01:51.576 CXX test/cpp_headers/fsdev.o 00:01:51.576 CXX test/cpp_headers/ftl.o 00:01:51.576 CXX test/cpp_headers/fd.o 00:01:51.576 CXX test/cpp_headers/file.o 00:01:51.576 CXX test/cpp_headers/fsdev_module.o 00:01:51.576 CXX test/cpp_headers/fuse_dispatcher.o 00:01:51.576 CXX test/cpp_headers/hexlify.o 00:01:51.576 CXX test/cpp_headers/histogram_data.o 00:01:51.576 CXX test/cpp_headers/gpt_spec.o 00:01:51.576 CXX test/cpp_headers/idxd.o 00:01:51.576 CXX test/cpp_headers/idxd_spec.o 00:01:51.576 CXX test/cpp_headers/init.o 00:01:51.576 CXX test/cpp_headers/ioat.o 00:01:51.576 CXX test/cpp_headers/iscsi_spec.o 00:01:51.576 CXX 
test/cpp_headers/ioat_spec.o 00:01:51.576 CXX test/cpp_headers/json.o 00:01:51.576 CXX test/cpp_headers/jsonrpc.o 00:01:51.576 CXX test/cpp_headers/keyring.o 00:01:51.576 CXX test/cpp_headers/keyring_module.o 00:01:51.576 CXX test/cpp_headers/likely.o 00:01:51.576 CXX test/cpp_headers/log.o 00:01:51.576 CXX test/cpp_headers/lvol.o 00:01:51.576 CXX test/cpp_headers/md5.o 00:01:51.576 CXX test/cpp_headers/memory.o 00:01:51.576 CXX test/cpp_headers/mmio.o 00:01:51.576 CXX test/cpp_headers/nbd.o 00:01:51.576 CXX test/cpp_headers/net.o 00:01:51.576 CXX test/cpp_headers/notify.o 00:01:51.576 CXX test/cpp_headers/nvme.o 00:01:51.576 CXX test/cpp_headers/nvme_intel.o 00:01:51.576 CXX test/cpp_headers/nvme_ocssd.o 00:01:51.576 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:51.576 CXX test/cpp_headers/nvme_spec.o 00:01:51.576 CXX test/cpp_headers/nvme_zns.o 00:01:51.576 CXX test/cpp_headers/nvmf_cmd.o 00:01:51.576 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:51.576 CXX test/cpp_headers/nvmf.o 00:01:51.576 CXX test/cpp_headers/nvmf_spec.o 00:01:51.576 CXX test/cpp_headers/nvmf_transport.o 00:01:51.576 CXX test/cpp_headers/opal.o 00:01:51.576 CC examples/util/zipf/zipf.o 00:01:51.576 CC app/fio/nvme/fio_plugin.o 00:01:51.576 CXX test/cpp_headers/opal_spec.o 00:01:51.576 CC examples/ioat/perf/perf.o 00:01:51.576 CC test/env/pci/pci_ut.o 00:01:51.576 CC test/app/histogram_perf/histogram_perf.o 00:01:51.576 CC examples/ioat/verify/verify.o 00:01:51.576 CC test/thread/poller_perf/poller_perf.o 00:01:51.576 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:51.576 CC test/app/stub/stub.o 00:01:51.576 CC test/env/memory/memory_ut.o 00:01:51.847 CC test/app/jsoncat/jsoncat.o 00:01:51.847 CC test/env/vtophys/vtophys.o 00:01:51.847 CC app/fio/bdev/fio_plugin.o 00:01:51.847 CC test/dma/test_dma/test_dma.o 00:01:51.847 CC test/app/bdev_svc/bdev_svc.o 00:01:51.847 LINK spdk_lspci 00:01:52.112 LINK spdk_trace_record 00:01:52.112 LINK interrupt_tgt 00:01:52.112 CC 
test/env/mem_callbacks/mem_callbacks.o 00:01:52.113 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:52.113 LINK rpc_client_test 00:01:52.113 LINK iscsi_tgt 00:01:52.113 CXX test/cpp_headers/pci_ids.o 00:01:52.113 LINK nvmf_tgt 00:01:52.113 LINK poller_perf 00:01:52.113 LINK spdk_tgt 00:01:52.113 CXX test/cpp_headers/pipe.o 00:01:52.113 CXX test/cpp_headers/queue.o 00:01:52.113 CXX test/cpp_headers/reduce.o 00:01:52.113 CXX test/cpp_headers/rpc.o 00:01:52.113 LINK spdk_nvme_discover 00:01:52.113 CXX test/cpp_headers/scheduler.o 00:01:52.113 CXX test/cpp_headers/scsi.o 00:01:52.113 CXX test/cpp_headers/sock.o 00:01:52.113 LINK stub 00:01:52.113 CXX test/cpp_headers/stdinc.o 00:01:52.113 CXX test/cpp_headers/scsi_spec.o 00:01:52.113 CXX test/cpp_headers/string.o 00:01:52.113 CXX test/cpp_headers/thread.o 00:01:52.113 LINK env_dpdk_post_init 00:01:52.113 CXX test/cpp_headers/trace.o 00:01:52.113 CXX test/cpp_headers/trace_parser.o 00:01:52.113 CXX test/cpp_headers/tree.o 00:01:52.113 CXX test/cpp_headers/ublk.o 00:01:52.113 CXX test/cpp_headers/util.o 00:01:52.113 CXX test/cpp_headers/uuid.o 00:01:52.113 CXX test/cpp_headers/version.o 00:01:52.113 CXX test/cpp_headers/vfio_user_pci.o 00:01:52.113 CXX test/cpp_headers/vfio_user_spec.o 00:01:52.113 CXX test/cpp_headers/vhost.o 00:01:52.113 CXX test/cpp_headers/vmd.o 00:01:52.113 CXX test/cpp_headers/xor.o 00:01:52.113 CXX test/cpp_headers/zipf.o 00:01:52.113 LINK ioat_perf 00:01:52.371 LINK zipf 00:01:52.371 LINK spdk_dd 00:01:52.371 LINK histogram_perf 00:01:52.371 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:52.371 LINK jsoncat 00:01:52.371 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:52.371 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:52.371 LINK spdk_trace 00:01:52.371 LINK vtophys 00:01:52.371 LINK verify 00:01:52.371 LINK bdev_svc 00:01:52.371 LINK pci_ut 00:01:52.629 LINK spdk_nvme 00:01:52.629 LINK spdk_bdev 00:01:52.629 CC test/event/reactor_perf/reactor_perf.o 00:01:52.629 CC test/event/reactor/reactor.o 
00:01:52.629 CC test/event/event_perf/event_perf.o 00:01:52.629 CC test/event/app_repeat/app_repeat.o 00:01:52.629 CC app/vhost/vhost.o 00:01:52.629 LINK nvme_fuzz 00:01:52.629 CC test/event/scheduler/scheduler.o 00:01:52.629 LINK spdk_nvme_identify 00:01:52.629 LINK test_dma 00:01:52.629 CC examples/idxd/perf/perf.o 00:01:52.629 CC examples/sock/hello_world/hello_sock.o 00:01:52.629 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.629 CC examples/vmd/led/led.o 00:01:52.887 LINK vhost_fuzz 00:01:52.887 CC examples/thread/thread/thread_ex.o 00:01:52.887 LINK reactor_perf 00:01:52.887 LINK event_perf 00:01:52.887 LINK reactor 00:01:52.887 LINK spdk_nvme_perf 00:01:52.887 LINK spdk_top 00:01:52.887 LINK mem_callbacks 00:01:52.887 LINK app_repeat 00:01:52.887 LINK vhost 00:01:52.887 LINK lsvmd 00:01:52.887 LINK led 00:01:52.887 LINK scheduler 00:01:52.887 LINK hello_sock 00:01:53.146 LINK idxd_perf 00:01:53.146 LINK thread 00:01:53.146 LINK memory_ut 00:01:53.146 CC test/nvme/e2edp/nvme_dp.o 00:01:53.146 CC test/nvme/connect_stress/connect_stress.o 00:01:53.146 CC test/nvme/sgl/sgl.o 00:01:53.146 CC test/nvme/fused_ordering/fused_ordering.o 00:01:53.146 CC test/nvme/aer/aer.o 00:01:53.146 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:53.146 CC test/nvme/startup/startup.o 00:01:53.146 CC test/nvme/reset/reset.o 00:01:53.146 CC test/nvme/compliance/nvme_compliance.o 00:01:53.146 CC test/nvme/overhead/overhead.o 00:01:53.146 CC test/nvme/simple_copy/simple_copy.o 00:01:53.146 CC test/nvme/reserve/reserve.o 00:01:53.146 CC test/nvme/cuse/cuse.o 00:01:53.146 CC test/nvme/fdp/fdp.o 00:01:53.146 CC test/nvme/err_injection/err_injection.o 00:01:53.146 CC test/nvme/boot_partition/boot_partition.o 00:01:53.146 CC test/accel/dif/dif.o 00:01:53.146 CC test/blobfs/mkfs/mkfs.o 00:01:53.403 CC test/lvol/esnap/esnap.o 00:01:53.403 LINK connect_stress 00:01:53.403 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:53.403 LINK startup 00:01:53.403 CC examples/nvme/cmb_copy/cmb_copy.o 
00:01:53.403 LINK boot_partition 00:01:53.403 CC examples/nvme/arbitration/arbitration.o 00:01:53.403 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:53.403 CC examples/nvme/hello_world/hello_world.o 00:01:53.403 CC examples/nvme/hotplug/hotplug.o 00:01:53.403 LINK err_injection 00:01:53.403 CC examples/nvme/abort/abort.o 00:01:53.403 LINK fused_ordering 00:01:53.403 CC examples/nvme/reconnect/reconnect.o 00:01:53.403 LINK reserve 00:01:53.403 LINK doorbell_aers 00:01:53.403 LINK simple_copy 00:01:53.403 LINK mkfs 00:01:53.403 LINK nvme_dp 00:01:53.403 LINK reset 00:01:53.403 LINK aer 00:01:53.403 LINK sgl 00:01:53.403 LINK overhead 00:01:53.403 LINK nvme_compliance 00:01:53.660 CC examples/accel/perf/accel_perf.o 00:01:53.660 CC examples/blob/cli/blobcli.o 00:01:53.660 LINK fdp 00:01:53.660 CC examples/blob/hello_world/hello_blob.o 00:01:53.660 LINK pmr_persistence 00:01:53.660 CC examples/fsdev/hello_world/hello_fsdev.o 00:01:53.660 LINK cmb_copy 00:01:53.660 LINK hotplug 00:01:53.660 LINK hello_world 00:01:53.660 LINK arbitration 00:01:53.661 LINK iscsi_fuzz 00:01:53.661 LINK reconnect 00:01:53.661 LINK abort 00:01:53.920 LINK hello_blob 00:01:53.920 LINK dif 00:01:53.920 LINK nvme_manage 00:01:53.920 LINK hello_fsdev 00:01:53.920 LINK accel_perf 00:01:53.920 LINK blobcli 00:01:54.179 LINK cuse 00:01:54.438 CC test/bdev/bdevio/bdevio.o 00:01:54.438 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.438 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.696 LINK hello_bdev 00:01:54.696 LINK bdevio 00:01:54.955 LINK bdevperf 00:01:55.523 CC examples/nvmf/nvmf/nvmf.o 00:01:55.781 LINK nvmf 00:01:56.717 LINK esnap 00:01:57.284 00:01:57.284 real 0m54.927s 00:01:57.284 user 8m13.298s 00:01:57.284 sys 3m36.207s 00:01:57.284 16:13:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:57.284 16:13:23 make -- common/autotest_common.sh@10 -- $ set +x 00:01:57.284 ************************************ 00:01:57.284 END TEST make 00:01:57.284 
************************************ 00:01:57.284 16:13:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:57.284 16:13:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:57.284 16:13:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:57.284 16:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.284 16:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:57.284 16:13:23 -- pm/common@44 -- $ pid=2539216 00:01:57.284 16:13:23 -- pm/common@50 -- $ kill -TERM 2539216 00:01:57.284 16:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.284 16:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:57.284 16:13:23 -- pm/common@44 -- $ pid=2539217 00:01:57.284 16:13:23 -- pm/common@50 -- $ kill -TERM 2539217 00:01:57.284 16:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.284 16:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:57.284 16:13:23 -- pm/common@44 -- $ pid=2539219 00:01:57.284 16:13:23 -- pm/common@50 -- $ kill -TERM 2539219 00:01:57.284 16:13:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.284 16:13:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:57.284 16:13:23 -- pm/common@44 -- $ pid=2539246 00:01:57.284 16:13:23 -- pm/common@50 -- $ sudo -E kill -TERM 2539246 00:01:57.284 16:13:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:01:57.284 16:13:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.284 16:13:23 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 
00:01:57.284 16:13:23 -- common/autotest_common.sh@1693 -- # lcov --version 00:01:57.284 16:13:23 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:01:57.284 16:13:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:01:57.284 16:13:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:01:57.284 16:13:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:01:57.284 16:13:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:01:57.284 16:13:24 -- scripts/common.sh@336 -- # IFS=.-: 00:01:57.284 16:13:24 -- scripts/common.sh@336 -- # read -ra ver1 00:01:57.284 16:13:24 -- scripts/common.sh@337 -- # IFS=.-: 00:01:57.284 16:13:24 -- scripts/common.sh@337 -- # read -ra ver2 00:01:57.284 16:13:24 -- scripts/common.sh@338 -- # local 'op=<' 00:01:57.284 16:13:24 -- scripts/common.sh@340 -- # ver1_l=2 00:01:57.284 16:13:24 -- scripts/common.sh@341 -- # ver2_l=1 00:01:57.284 16:13:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:01:57.284 16:13:24 -- scripts/common.sh@344 -- # case "$op" in 00:01:57.284 16:13:24 -- scripts/common.sh@345 -- # : 1 00:01:57.284 16:13:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:01:57.284 16:13:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:57.284 16:13:24 -- scripts/common.sh@365 -- # decimal 1 00:01:57.284 16:13:24 -- scripts/common.sh@353 -- # local d=1 00:01:57.284 16:13:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:01:57.284 16:13:24 -- scripts/common.sh@355 -- # echo 1 00:01:57.284 16:13:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:01:57.284 16:13:24 -- scripts/common.sh@366 -- # decimal 2 00:01:57.284 16:13:24 -- scripts/common.sh@353 -- # local d=2 00:01:57.284 16:13:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:01:57.284 16:13:24 -- scripts/common.sh@355 -- # echo 2 00:01:57.284 16:13:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:01:57.284 16:13:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:01:57.284 16:13:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:01:57.284 16:13:24 -- scripts/common.sh@368 -- # return 0 00:01:57.284 16:13:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:01:57.284 16:13:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:01:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:57.284 --rc genhtml_branch_coverage=1 00:01:57.284 --rc genhtml_function_coverage=1 00:01:57.284 --rc genhtml_legend=1 00:01:57.284 --rc geninfo_all_blocks=1 00:01:57.284 --rc geninfo_unexecuted_blocks=1 00:01:57.284 00:01:57.284 ' 00:01:57.284 16:13:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:01:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:57.284 --rc genhtml_branch_coverage=1 00:01:57.284 --rc genhtml_function_coverage=1 00:01:57.284 --rc genhtml_legend=1 00:01:57.284 --rc geninfo_all_blocks=1 00:01:57.284 --rc geninfo_unexecuted_blocks=1 00:01:57.284 00:01:57.284 ' 00:01:57.284 16:13:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:01:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:57.284 --rc genhtml_branch_coverage=1 00:01:57.284 --rc 
genhtml_function_coverage=1 00:01:57.284 --rc genhtml_legend=1 00:01:57.284 --rc geninfo_all_blocks=1 00:01:57.284 --rc geninfo_unexecuted_blocks=1 00:01:57.284 00:01:57.284 ' 00:01:57.284 16:13:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:01:57.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:57.284 --rc genhtml_branch_coverage=1 00:01:57.284 --rc genhtml_function_coverage=1 00:01:57.284 --rc genhtml_legend=1 00:01:57.284 --rc geninfo_all_blocks=1 00:01:57.284 --rc geninfo_unexecuted_blocks=1 00:01:57.284 00:01:57.284 ' 00:01:57.284 16:13:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:57.284 16:13:24 -- nvmf/common.sh@7 -- # uname -s 00:01:57.284 16:13:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:57.284 16:13:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:57.284 16:13:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:57.284 16:13:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:57.284 16:13:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:57.284 16:13:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:57.284 16:13:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:57.284 16:13:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:57.284 16:13:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:57.284 16:13:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:57.284 16:13:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:01:57.284 16:13:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:01:57.284 16:13:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:57.284 16:13:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:57.284 16:13:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:57.284 16:13:24 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:57.284 16:13:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.284 16:13:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:01:57.284 16:13:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:57.284 16:13:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.284 16:13:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.284 16:13:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.284 16:13:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.284 16:13:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.284 16:13:24 -- paths/export.sh@5 -- # export PATH 00:01:57.284 16:13:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.284 16:13:24 -- nvmf/common.sh@51 -- # : 0 00:01:57.284 16:13:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:01:57.284 16:13:24 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:01:57.284 16:13:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:57.284 16:13:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:57.284 16:13:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:57.284 16:13:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:01:57.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:01:57.284 16:13:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:01:57.284 16:13:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:01:57.284 16:13:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:01:57.284 16:13:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:57.284 16:13:24 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.284 16:13:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.284 16:13:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.284 16:13:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.284 16:13:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.284 16:13:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.285 16:13:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.285 16:13:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.285 16:13:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.285 16:13:24 -- spdk/autotest.sh@48 -- # udevadm_pid=2601792 00:01:57.285 16:13:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:57.285 16:13:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.285 16:13:24 -- pm/common@17 -- # local monitor 00:01:57.285 16:13:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.285 16:13:24 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:57.285 16:13:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.285 16:13:24 -- pm/common@21 -- # date +%s 00:01:57.285 16:13:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.285 16:13:24 -- pm/common@21 -- # date +%s 00:01:57.285 16:13:24 -- pm/common@25 -- # sleep 1 00:01:57.285 16:13:24 -- pm/common@21 -- # date +%s 00:01:57.285 16:13:24 -- pm/common@21 -- # date +%s 00:01:57.285 16:13:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730733204 00:01:57.285 16:13:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730733204 00:01:57.285 16:13:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730733204 00:01:57.285 16:13:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730733204 00:01:57.543 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730733204_collect-vmstat.pm.log 00:01:57.543 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730733204_collect-cpu-load.pm.log 00:01:57.543 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730733204_collect-cpu-temp.pm.log 00:01:57.543 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730733204_collect-bmc-pm.bmc.pm.log 00:01:58.480 
16:13:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.480 16:13:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:58.480 16:13:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:01:58.480 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:01:58.480 16:13:25 -- spdk/autotest.sh@59 -- # create_test_list 00:01:58.480 16:13:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:01:58.480 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:01:58.480 16:13:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.480 16:13:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.480 16:13:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.480 16:13:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.480 16:13:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.480 16:13:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:58.480 16:13:25 -- common/autotest_common.sh@1457 -- # uname 00:01:58.480 16:13:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:01:58.480 16:13:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:58.480 16:13:25 -- common/autotest_common.sh@1477 -- # uname 00:01:58.480 16:13:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:01:58.480 16:13:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:01:58.480 16:13:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:01:58.480 lcov: LCOV version 1.15 00:01:58.480 16:13:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:16.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:16.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:23.233 16:13:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:23.233 16:13:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:23.233 16:13:50 -- common/autotest_common.sh@10 -- # set +x 00:02:23.233 16:13:50 -- spdk/autotest.sh@78 -- # rm -f 00:02:23.233 16:13:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.514 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:26.514 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:26.514 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:26.514 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:26.514 16:13:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:26.514 16:13:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:26.514 16:13:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:26.514 16:13:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:26.514 16:13:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:26.514 16:13:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:26.514 16:13:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:26.514 16:13:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:26.514 16:13:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:26.514 16:13:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:26.514 16:13:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:26.514 16:13:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:26.514 16:13:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:26.514 16:13:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:26.514 16:13:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:26.514 No valid GPT data, bailing 00:02:26.514 16:13:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:26.514 16:13:53 -- scripts/common.sh@394 -- # pt= 00:02:26.514 16:13:53 -- scripts/common.sh@395 -- # return 1 00:02:26.514 16:13:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:26.514 1+0 records in 00:02:26.514 1+0 records out 00:02:26.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00173475 s, 604 MB/s 00:02:26.514 16:13:53 -- spdk/autotest.sh@105 -- # sync 00:02:26.514 16:13:53 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:26.514 16:13:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:26.514 16:13:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:31.778 16:13:58 -- spdk/autotest.sh@111 -- # uname -s 00:02:31.778 16:13:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:31.778 16:13:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:31.778 16:13:58 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:34.305 Hugepages 00:02:34.305 node hugesize free / total 00:02:34.305 node0 1048576kB 0 / 0 00:02:34.305 node0 2048kB 0 / 0 00:02:34.305 node1 1048576kB 0 / 0 00:02:34.305 node1 2048kB 0 / 0 00:02:34.305 00:02:34.305 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.305 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:34.305 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:34.306 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:34.306 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:34.306 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:34.306 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:34.306 16:14:00 -- spdk/autotest.sh@117 -- # uname -s 00:02:34.306 16:14:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:34.306 16:14:00 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:02:34.306 16:14:00 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.835 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:36.835 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:38.214 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:38.214 16:14:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:39.150 16:14:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:39.150 16:14:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:39.150 16:14:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:39.150 16:14:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:39.150 16:14:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:39.150 16:14:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:39.150 16:14:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:39.150 16:14:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:39.150 16:14:05 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:02:39.150 16:14:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:39.150 16:14:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:39.150 16:14:05 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.682 Waiting for block devices as requested 00:02:41.682 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:02:41.682 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:02:41.682 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:02:41.682 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:02:41.940 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:02:41.940 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:02:41.940 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:02:41.940 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:02:42.200 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:02:42.200 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:02:42.200 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:02:42.459 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:02:42.459 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:02:42.459 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:02:42.459 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:02:42.718 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:02:42.718 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:02:42.718 16:14:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:02:42.718 16:14:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:02:42.718 16:14:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:02:42.718 16:14:09 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:02:42.718 16:14:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:02:42.719 16:14:09 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:02:42.719 16:14:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:02:42.719 16:14:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:02:42.719 16:14:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:02:42.719 16:14:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:02:42.719 16:14:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:02:42.719 16:14:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:02:42.719 16:14:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:02:42.977 16:14:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:02:42.977 16:14:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:02:42.977 16:14:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:02:42.977 16:14:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:02:42.977 16:14:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:02:42.977 16:14:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:02:42.977 16:14:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:02:42.977 16:14:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:02:42.977 16:14:09 -- common/autotest_common.sh@1543 -- # continue 00:02:42.977 16:14:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:02:42.977 16:14:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:42.977 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:02:42.977 16:14:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:02:42.977 16:14:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.977 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:02:42.977 16:14:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:45.511 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:02:45.511 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:45.511 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:46.449 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:46.708 16:14:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:02:46.708 16:14:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:46.708 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:02:46.708 16:14:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:02:46.708 16:14:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:02:46.708 16:14:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:02:46.708 16:14:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:02:46.708 16:14:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:02:46.708 16:14:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:02:46.708 16:14:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:02:46.708 16:14:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:02:46.708 16:14:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:46.708 16:14:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:46.708 16:14:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:02:46.708 16:14:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:46.708 16:14:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:46.708 16:14:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:46.708 16:14:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:46.708 16:14:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:02:46.708 16:14:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:02:46.708 16:14:13 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:02:46.708 16:14:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:02:46.708 16:14:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:02:46.708 16:14:13 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:02:46.708 16:14:13 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:02:46.708 16:14:13 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:02:46.708 16:14:13 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2615971 00:02:46.708 16:14:13 -- common/autotest_common.sh@1585 -- # waitforlisten 2615971 00:02:46.708 16:14:13 -- common/autotest_common.sh@835 -- # '[' -z 2615971 ']' 00:02:46.708 16:14:13 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:46.708 16:14:13 -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:46.708 16:14:13 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:46.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
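The `get_nvme_bdfs_by_id 0x0a54` sequence traced above keeps only the controllers whose PCI `device` attribute matches the wanted ID. A self-contained sketch of that filter (the sysfs root is parameterized so it runs against a throwaway tree instead of real hardware; the real helper lives in autotest_common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the BDF-by-device-ID filter seen in the trace. SYSFS_ROOT stands in
# for /sys/bus/pci/devices so the example needs no actual PCI devices.
get_bdfs_by_id() {
  local want=$1; shift
  local bdf dev
  for bdf in "$@"; do
    dev=$(cat "$SYSFS_ROOT/$bdf/device")
    [[ $dev == "$want" ]] && printf '%s\n' "$bdf"
  done
  return 0
}

# Demo tree mirroring the hardware above: one NVMe controller (0x0a54)
# and one I/OAT DMA channel (0x2021).
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/0000:5e:00.0" "$SYSFS_ROOT/0000:00:04.0"
echo 0x0a54 > "$SYSFS_ROOT/0000:5e:00.0/device"
echo 0x2021 > "$SYSFS_ROOT/0000:00:04.0/device"
get_bdfs_by_id 0x0a54 0000:5e:00.0 0000:00:04.0   # prints only 0000:5e:00.0
```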
00:02:46.708 16:14:13 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:46.708 16:14:13 -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:46.708 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:02:46.968 [2024-11-04 16:14:13.577754] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:02:46.968 [2024-11-04 16:14:13.577801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615971 ] 00:02:46.968 [2024-11-04 16:14:13.639616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:46.968 [2024-11-04 16:14:13.681374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:47.226 16:14:13 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:47.226 16:14:13 -- common/autotest_common.sh@868 -- # return 0 00:02:47.226 16:14:13 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:02:47.226 16:14:13 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:02:47.226 16:14:13 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:02:50.531 nvme0n1 00:02:50.531 16:14:16 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:02:50.531 [2024-11-04 16:14:17.055504] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:02:50.531 request: 00:02:50.531 { 00:02:50.531 "nvme_ctrlr_name": "nvme0", 00:02:50.531 "password": "test", 00:02:50.531 "method": "bdev_nvme_opal_revert", 00:02:50.531 "req_id": 1 00:02:50.531 } 00:02:50.531 Got JSON-RPC error response 00:02:50.531 response: 00:02:50.531 { 00:02:50.531 "code": -32602, 
00:02:50.531 "message": "Invalid parameters" 00:02:50.531 } 00:02:50.531 16:14:17 -- common/autotest_common.sh@1591 -- # true 00:02:50.531 16:14:17 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:02:50.531 16:14:17 -- common/autotest_common.sh@1595 -- # killprocess 2615971 00:02:50.531 16:14:17 -- common/autotest_common.sh@954 -- # '[' -z 2615971 ']' 00:02:50.531 16:14:17 -- common/autotest_common.sh@958 -- # kill -0 2615971 00:02:50.531 16:14:17 -- common/autotest_common.sh@959 -- # uname 00:02:50.531 16:14:17 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:50.531 16:14:17 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2615971 00:02:50.531 16:14:17 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:50.531 16:14:17 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:50.531 16:14:17 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2615971' 00:02:50.531 killing process with pid 2615971 00:02:50.531 16:14:17 -- common/autotest_common.sh@973 -- # kill 2615971 00:02:50.531 16:14:17 -- common/autotest_common.sh@978 -- # wait 2615971 00:02:53.063 16:14:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:02:53.063 16:14:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:02:53.063 16:14:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:53.063 16:14:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:53.063 16:14:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:02:53.063 16:14:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:53.063 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:02:53.064 16:14:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:02:53.064 16:14:19 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:53.064 16:14:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:53.064 16:14:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:53.064 16:14:19 -- 
common/autotest_common.sh@10 -- # set +x 00:02:53.064 ************************************ 00:02:53.064 START TEST env 00:02:53.064 ************************************ 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:53.064 * Looking for test storage... 00:02:53.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:53.064 16:14:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.064 16:14:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.064 16:14:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.064 16:14:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.064 16:14:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.064 16:14:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.064 16:14:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.064 16:14:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.064 16:14:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.064 16:14:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.064 16:14:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.064 16:14:19 env -- scripts/common.sh@344 -- # case "$op" in 00:02:53.064 16:14:19 env -- scripts/common.sh@345 -- # : 1 00:02:53.064 16:14:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.064 16:14:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.064 16:14:19 env -- scripts/common.sh@365 -- # decimal 1 00:02:53.064 16:14:19 env -- scripts/common.sh@353 -- # local d=1 00:02:53.064 16:14:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.064 16:14:19 env -- scripts/common.sh@355 -- # echo 1 00:02:53.064 16:14:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.064 16:14:19 env -- scripts/common.sh@366 -- # decimal 2 00:02:53.064 16:14:19 env -- scripts/common.sh@353 -- # local d=2 00:02:53.064 16:14:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.064 16:14:19 env -- scripts/common.sh@355 -- # echo 2 00:02:53.064 16:14:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.064 16:14:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.064 16:14:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.064 16:14:19 env -- scripts/common.sh@368 -- # return 0 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.064 --rc genhtml_branch_coverage=1 00:02:53.064 --rc genhtml_function_coverage=1 00:02:53.064 --rc genhtml_legend=1 00:02:53.064 --rc geninfo_all_blocks=1 00:02:53.064 --rc geninfo_unexecuted_blocks=1 00:02:53.064 00:02:53.064 ' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.064 --rc genhtml_branch_coverage=1 00:02:53.064 --rc genhtml_function_coverage=1 00:02:53.064 --rc genhtml_legend=1 00:02:53.064 --rc geninfo_all_blocks=1 00:02:53.064 --rc geninfo_unexecuted_blocks=1 00:02:53.064 00:02:53.064 ' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:02:53.064 --rc genhtml_branch_coverage=1 00:02:53.064 --rc genhtml_function_coverage=1 00:02:53.064 --rc genhtml_legend=1 00:02:53.064 --rc geninfo_all_blocks=1 00:02:53.064 --rc geninfo_unexecuted_blocks=1 00:02:53.064 00:02:53.064 ' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.064 --rc genhtml_branch_coverage=1 00:02:53.064 --rc genhtml_function_coverage=1 00:02:53.064 --rc genhtml_legend=1 00:02:53.064 --rc geninfo_all_blocks=1 00:02:53.064 --rc geninfo_unexecuted_blocks=1 00:02:53.064 00:02:53.064 ' 00:02:53.064 16:14:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:53.064 16:14:19 env -- common/autotest_common.sh@10 -- # set +x 00:02:53.064 ************************************ 00:02:53.064 START TEST env_memory 00:02:53.064 ************************************ 00:02:53.064 16:14:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:53.064 00:02:53.064 00:02:53.064 CUnit - A unit testing framework for C - Version 2.1-3 00:02:53.064 http://cunit.sourceforge.net/ 00:02:53.064 00:02:53.064 00:02:53.064 Suite: memory 00:02:53.064 Test: alloc and free memory map ...[2024-11-04 16:14:19.556636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:02:53.064 passed 00:02:53.064 Test: mem map translation ...[2024-11-04 16:14:19.575840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:02:53.064 [2024-11-04 
16:14:19.575853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:02:53.064 [2024-11-04 16:14:19.575886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:02:53.064 [2024-11-04 16:14:19.575892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:02:53.064 passed 00:02:53.064 Test: mem map registration ...[2024-11-04 16:14:19.613630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:02:53.064 [2024-11-04 16:14:19.613643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:02:53.064 passed 00:02:53.064 Test: mem map adjacent registrations ...passed 00:02:53.064 00:02:53.064 Run Summary: Type Total Ran Passed Failed Inactive 00:02:53.064 suites 1 1 n/a 0 0 00:02:53.064 tests 4 4 4 0 0 00:02:53.064 asserts 152 152 152 0 n/a 00:02:53.064 00:02:53.064 Elapsed time = 0.139 seconds 00:02:53.064 00:02:53.064 real 0m0.152s 00:02:53.064 user 0m0.143s 00:02:53.064 sys 0m0.008s 00:02:53.064 16:14:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:53.064 16:14:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:02:53.064 ************************************ 00:02:53.064 END TEST env_memory 00:02:53.064 ************************************ 00:02:53.064 16:14:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:02:53.064 16:14:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:53.064 16:14:19 env -- common/autotest_common.sh@10 -- # set +x 00:02:53.064 ************************************ 00:02:53.064 START TEST env_vtophys 00:02:53.064 ************************************ 00:02:53.064 16:14:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:53.064 EAL: lib.eal log level changed from notice to debug 00:02:53.064 EAL: Detected lcore 0 as core 0 on socket 0 00:02:53.064 EAL: Detected lcore 1 as core 1 on socket 0 00:02:53.064 EAL: Detected lcore 2 as core 2 on socket 0 00:02:53.064 EAL: Detected lcore 3 as core 3 on socket 0 00:02:53.064 EAL: Detected lcore 4 as core 4 on socket 0 00:02:53.064 EAL: Detected lcore 5 as core 5 on socket 0 00:02:53.064 EAL: Detected lcore 6 as core 6 on socket 0 00:02:53.064 EAL: Detected lcore 7 as core 8 on socket 0 00:02:53.064 EAL: Detected lcore 8 as core 9 on socket 0 00:02:53.064 EAL: Detected lcore 9 as core 10 on socket 0 00:02:53.064 EAL: Detected lcore 10 as core 11 on socket 0 00:02:53.064 EAL: Detected lcore 11 as core 12 on socket 0 00:02:53.064 EAL: Detected lcore 12 as core 13 on socket 0 00:02:53.064 EAL: Detected lcore 13 as core 16 on socket 0 00:02:53.064 EAL: Detected lcore 14 as core 17 on socket 0 00:02:53.064 EAL: Detected lcore 15 as core 18 on socket 0 00:02:53.064 EAL: Detected lcore 16 as core 19 on socket 0 00:02:53.064 EAL: Detected lcore 17 as core 20 on socket 0 00:02:53.064 EAL: Detected lcore 18 as core 21 on socket 0 00:02:53.064 EAL: Detected lcore 19 as core 25 on socket 0 00:02:53.064 EAL: Detected lcore 20 as core 26 on socket 0 00:02:53.064 EAL: Detected lcore 21 as core 27 on socket 0 00:02:53.064 EAL: Detected lcore 22 as core 28 on socket 0 00:02:53.064 EAL: Detected lcore 23 as core 29 on socket 0 00:02:53.064 EAL: Detected lcore 24 as core 0 on socket 1 00:02:53.064 EAL: Detected lcore 25 
as core 1 on socket 1 00:02:53.064 EAL: Detected lcore 26 as core 2 on socket 1 00:02:53.064 EAL: Detected lcore 27 as core 3 on socket 1 00:02:53.064 EAL: Detected lcore 28 as core 4 on socket 1 00:02:53.064 EAL: Detected lcore 29 as core 5 on socket 1 00:02:53.064 EAL: Detected lcore 30 as core 6 on socket 1 00:02:53.064 EAL: Detected lcore 31 as core 8 on socket 1 00:02:53.064 EAL: Detected lcore 32 as core 10 on socket 1 00:02:53.064 EAL: Detected lcore 33 as core 11 on socket 1 00:02:53.064 EAL: Detected lcore 34 as core 12 on socket 1 00:02:53.064 EAL: Detected lcore 35 as core 13 on socket 1 00:02:53.064 EAL: Detected lcore 36 as core 16 on socket 1 00:02:53.064 EAL: Detected lcore 37 as core 17 on socket 1 00:02:53.065 EAL: Detected lcore 38 as core 18 on socket 1 00:02:53.065 EAL: Detected lcore 39 as core 19 on socket 1 00:02:53.065 EAL: Detected lcore 40 as core 20 on socket 1 00:02:53.065 EAL: Detected lcore 41 as core 21 on socket 1 00:02:53.065 EAL: Detected lcore 42 as core 24 on socket 1 00:02:53.065 EAL: Detected lcore 43 as core 25 on socket 1 00:02:53.065 EAL: Detected lcore 44 as core 26 on socket 1 00:02:53.065 EAL: Detected lcore 45 as core 27 on socket 1 00:02:53.065 EAL: Detected lcore 46 as core 28 on socket 1 00:02:53.065 EAL: Detected lcore 47 as core 29 on socket 1 00:02:53.065 EAL: Detected lcore 48 as core 0 on socket 0 00:02:53.065 EAL: Detected lcore 49 as core 1 on socket 0 00:02:53.065 EAL: Detected lcore 50 as core 2 on socket 0 00:02:53.065 EAL: Detected lcore 51 as core 3 on socket 0 00:02:53.065 EAL: Detected lcore 52 as core 4 on socket 0 00:02:53.065 EAL: Detected lcore 53 as core 5 on socket 0 00:02:53.065 EAL: Detected lcore 54 as core 6 on socket 0 00:02:53.065 EAL: Detected lcore 55 as core 8 on socket 0 00:02:53.065 EAL: Detected lcore 56 as core 9 on socket 0 00:02:53.065 EAL: Detected lcore 57 as core 10 on socket 0 00:02:53.065 EAL: Detected lcore 58 as core 11 on socket 0 00:02:53.065 EAL: Detected lcore 59 as core 
12 on socket 0 00:02:53.065 EAL: Detected lcore 60 as core 13 on socket 0 00:02:53.065 EAL: Detected lcore 61 as core 16 on socket 0 00:02:53.065 EAL: Detected lcore 62 as core 17 on socket 0 00:02:53.065 EAL: Detected lcore 63 as core 18 on socket 0 00:02:53.065 EAL: Detected lcore 64 as core 19 on socket 0 00:02:53.065 EAL: Detected lcore 65 as core 20 on socket 0 00:02:53.065 EAL: Detected lcore 66 as core 21 on socket 0 00:02:53.065 EAL: Detected lcore 67 as core 25 on socket 0 00:02:53.065 EAL: Detected lcore 68 as core 26 on socket 0 00:02:53.065 EAL: Detected lcore 69 as core 27 on socket 0 00:02:53.065 EAL: Detected lcore 70 as core 28 on socket 0 00:02:53.065 EAL: Detected lcore 71 as core 29 on socket 0 00:02:53.065 EAL: Detected lcore 72 as core 0 on socket 1 00:02:53.065 EAL: Detected lcore 73 as core 1 on socket 1 00:02:53.065 EAL: Detected lcore 74 as core 2 on socket 1 00:02:53.065 EAL: Detected lcore 75 as core 3 on socket 1 00:02:53.065 EAL: Detected lcore 76 as core 4 on socket 1 00:02:53.065 EAL: Detected lcore 77 as core 5 on socket 1 00:02:53.065 EAL: Detected lcore 78 as core 6 on socket 1 00:02:53.065 EAL: Detected lcore 79 as core 8 on socket 1 00:02:53.065 EAL: Detected lcore 80 as core 10 on socket 1 00:02:53.065 EAL: Detected lcore 81 as core 11 on socket 1 00:02:53.065 EAL: Detected lcore 82 as core 12 on socket 1 00:02:53.065 EAL: Detected lcore 83 as core 13 on socket 1 00:02:53.065 EAL: Detected lcore 84 as core 16 on socket 1 00:02:53.065 EAL: Detected lcore 85 as core 17 on socket 1 00:02:53.065 EAL: Detected lcore 86 as core 18 on socket 1 00:02:53.065 EAL: Detected lcore 87 as core 19 on socket 1 00:02:53.065 EAL: Detected lcore 88 as core 20 on socket 1 00:02:53.065 EAL: Detected lcore 89 as core 21 on socket 1 00:02:53.065 EAL: Detected lcore 90 as core 24 on socket 1 00:02:53.065 EAL: Detected lcore 91 as core 25 on socket 1 00:02:53.065 EAL: Detected lcore 92 as core 26 on socket 1 00:02:53.065 EAL: Detected lcore 93 as core 
27 on socket 1 00:02:53.065 EAL: Detected lcore 94 as core 28 on socket 1 00:02:53.065 EAL: Detected lcore 95 as core 29 on socket 1 00:02:53.065 EAL: Maximum logical cores by configuration: 128 00:02:53.065 EAL: Detected CPU lcores: 96 00:02:53.065 EAL: Detected NUMA nodes: 2 00:02:53.065 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:02:53.065 EAL: Detected shared linkage of DPDK 00:02:53.065 EAL: No shared files mode enabled, IPC will be disabled 00:02:53.065 EAL: Bus pci wants IOVA as 'DC' 00:02:53.065 EAL: Buses did not request a specific IOVA mode. 00:02:53.065 EAL: IOMMU is available, selecting IOVA as VA mode. 00:02:53.065 EAL: Selected IOVA mode 'VA' 00:02:53.065 EAL: Probing VFIO support... 00:02:53.065 EAL: IOMMU type 1 (Type 1) is supported 00:02:53.065 EAL: IOMMU type 7 (sPAPR) is not supported 00:02:53.065 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:02:53.065 EAL: VFIO support initialized 00:02:53.065 EAL: Ask a virtual area of 0x2e000 bytes 00:02:53.065 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:02:53.065 EAL: Setting up physically contiguous memory... 
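The VFIO probe above reports IOMMU type 1 as supported. A common host-side way to spot-check this (an illustrative sketch only — EAL itself probes through the vfio ioctls, not sysfs) is to look for populated IOMMU groups:

```shell
#!/usr/bin/env bash
# Illustrative check only: a host generally has a usable IOMMU for VFIO when
# /sys/kernel/iommu_groups contains at least one group. The root path is
# parameterized so the sketch can be exercised against a fake tree.
iommu_groups_present() {
  local root=${1:-/sys/kernel/iommu_groups}
  [ -d "$root" ] && [ -n "$(ls -A "$root" 2>/dev/null)" ]
}

demo=$(mktemp -d)
mkdir -p "$demo/0" "$demo/1"
iommu_groups_present "$demo" && echo "IOMMU groups found"
```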
00:02:53.065 EAL: Setting maximum number of open files to 524288 00:02:53.065 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:02:53.065 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:02:53.065 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:02:53.065 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:02:53.065 EAL: Ask a virtual area of 0x61000 bytes 00:02:53.065 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:02:53.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:53.065 EAL: Ask a virtual area of 0x400000000 bytes 00:02:53.065 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:02:53.065 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:02:53.065 EAL: Hugepages will be freed exactly as allocated. 
00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: TSC frequency is ~2100000 KHz 00:02:53.065 EAL: Main lcore 0 is ready (tid=7f3196457a00;cpuset=[0]) 00:02:53.065 EAL: Trying to obtain current memory policy. 00:02:53.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.065 EAL: Restoring previous memory policy: 0 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was expanded by 2MB 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: No PCI address specified using 'addr=' in: bus=pci 00:02:53.065 EAL: Mem event callback 'spdk:(nil)' registered 00:02:53.065 00:02:53.065 00:02:53.065 CUnit - A unit testing framework for C - Version 2.1-3 00:02:53.065 http://cunit.sourceforge.net/ 00:02:53.065 00:02:53.065 00:02:53.065 Suite: components_suite 00:02:53.065 Test: vtophys_malloc_test ...passed 00:02:53.065 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:02:53.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.065 EAL: Restoring previous memory policy: 4 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was expanded by 4MB 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was shrunk by 4MB 00:02:53.065 EAL: Trying to obtain current memory policy. 
00:02:53.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.065 EAL: Restoring previous memory policy: 4 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was expanded by 6MB 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was shrunk by 6MB 00:02:53.065 EAL: Trying to obtain current memory policy. 00:02:53.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.065 EAL: Restoring previous memory policy: 4 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was expanded by 10MB 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.065 EAL: Heap on socket 0 was shrunk by 10MB 00:02:53.065 EAL: Trying to obtain current memory policy. 00:02:53.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.065 EAL: Restoring previous memory policy: 4 00:02:53.065 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.065 EAL: request: mp_malloc_sync 00:02:53.065 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was expanded by 18MB 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was shrunk by 18MB 00:02:53.066 EAL: Trying to obtain current memory policy. 
00:02:53.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.066 EAL: Restoring previous memory policy: 4 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was expanded by 34MB 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was shrunk by 34MB 00:02:53.066 EAL: Trying to obtain current memory policy. 00:02:53.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.066 EAL: Restoring previous memory policy: 4 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was expanded by 66MB 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was shrunk by 66MB 00:02:53.066 EAL: Trying to obtain current memory policy. 00:02:53.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.066 EAL: Restoring previous memory policy: 4 00:02:53.066 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.066 EAL: request: mp_malloc_sync 00:02:53.066 EAL: No shared files mode enabled, IPC is disabled 00:02:53.066 EAL: Heap on socket 0 was expanded by 130MB 00:02:53.323 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.323 EAL: request: mp_malloc_sync 00:02:53.323 EAL: No shared files mode enabled, IPC is disabled 00:02:53.323 EAL: Heap on socket 0 was shrunk by 130MB 00:02:53.323 EAL: Trying to obtain current memory policy. 
00:02:53.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.323 EAL: Restoring previous memory policy: 4 00:02:53.323 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.323 EAL: request: mp_malloc_sync 00:02:53.323 EAL: No shared files mode enabled, IPC is disabled 00:02:53.323 EAL: Heap on socket 0 was expanded by 258MB 00:02:53.323 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.323 EAL: request: mp_malloc_sync 00:02:53.324 EAL: No shared files mode enabled, IPC is disabled 00:02:53.324 EAL: Heap on socket 0 was shrunk by 258MB 00:02:53.324 EAL: Trying to obtain current memory policy. 00:02:53.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.582 EAL: Restoring previous memory policy: 4 00:02:53.582 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.582 EAL: request: mp_malloc_sync 00:02:53.582 EAL: No shared files mode enabled, IPC is disabled 00:02:53.582 EAL: Heap on socket 0 was expanded by 514MB 00:02:53.582 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.582 EAL: request: mp_malloc_sync 00:02:53.582 EAL: No shared files mode enabled, IPC is disabled 00:02:53.582 EAL: Heap on socket 0 was shrunk by 514MB 00:02:53.582 EAL: Trying to obtain current memory policy. 
00:02:53.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:53.841 EAL: Restoring previous memory policy: 4 00:02:53.841 EAL: Calling mem event callback 'spdk:(nil)' 00:02:53.841 EAL: request: mp_malloc_sync 00:02:53.841 EAL: No shared files mode enabled, IPC is disabled 00:02:53.841 EAL: Heap on socket 0 was expanded by 1026MB 00:02:53.841 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.101 EAL: request: mp_malloc_sync 00:02:54.101 EAL: No shared files mode enabled, IPC is disabled 00:02:54.101 EAL: Heap on socket 0 was shrunk by 1026MB 00:02:54.101 passed 00:02:54.101 00:02:54.101 Run Summary: Type Total Ran Passed Failed Inactive 00:02:54.101 suites 1 1 n/a 0 0 00:02:54.101 tests 2 2 2 0 0 00:02:54.101 asserts 497 497 497 0 n/a 00:02:54.101 00:02:54.101 Elapsed time = 0.960 seconds 00:02:54.101 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.101 EAL: request: mp_malloc_sync 00:02:54.101 EAL: No shared files mode enabled, IPC is disabled 00:02:54.101 EAL: Heap on socket 0 was shrunk by 2MB 00:02:54.101 EAL: No shared files mode enabled, IPC is disabled 00:02:54.101 EAL: No shared files mode enabled, IPC is disabled 00:02:54.101 EAL: No shared files mode enabled, IPC is disabled 00:02:54.101 00:02:54.101 real 0m1.079s 00:02:54.101 user 0m0.638s 00:02:54.101 sys 0m0.413s 00:02:54.101 16:14:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:54.101 16:14:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:02:54.101 ************************************ 00:02:54.101 END TEST env_vtophys 00:02:54.101 ************************************ 00:02:54.101 16:14:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:54.101 16:14:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.101 16:14:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.101 16:14:20 env -- common/autotest_common.sh@10 -- # set +x 00:02:54.101 
************************************ 00:02:54.101 START TEST env_pci 00:02:54.101 ************************************ 00:02:54.101 16:14:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:54.101 00:02:54.101 00:02:54.101 CUnit - A unit testing framework for C - Version 2.1-3 00:02:54.101 http://cunit.sourceforge.net/ 00:02:54.101 00:02:54.101 00:02:54.101 Suite: pci 00:02:54.101 Test: pci_hook ...[2024-11-04 16:14:20.899539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2617288 has claimed it 00:02:54.360 EAL: Cannot find device (10000:00:01.0) 00:02:54.360 EAL: Failed to attach device on primary process 00:02:54.360 passed 00:02:54.360 00:02:54.361 Run Summary: Type Total Ran Passed Failed Inactive 00:02:54.361 suites 1 1 n/a 0 0 00:02:54.361 tests 1 1 1 0 0 00:02:54.361 asserts 25 25 25 0 n/a 00:02:54.361 00:02:54.361 Elapsed time = 0.027 seconds 00:02:54.361 00:02:54.361 real 0m0.048s 00:02:54.361 user 0m0.020s 00:02:54.361 sys 0m0.027s 00:02:54.361 16:14:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:54.361 16:14:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:02:54.361 ************************************ 00:02:54.361 END TEST env_pci 00:02:54.361 ************************************ 00:02:54.361 16:14:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:02:54.361 16:14:20 env -- env/env.sh@15 -- # uname 00:02:54.361 16:14:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:02:54.361 16:14:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:02:54.361 16:14:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:54.361 16:14:20 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:02:54.361 16:14:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.361 16:14:20 env -- common/autotest_common.sh@10 -- # set +x 00:02:54.361 ************************************ 00:02:54.361 START TEST env_dpdk_post_init 00:02:54.361 ************************************ 00:02:54.361 16:14:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:54.361 EAL: Detected CPU lcores: 96 00:02:54.361 EAL: Detected NUMA nodes: 2 00:02:54.361 EAL: Detected shared linkage of DPDK 00:02:54.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:54.361 EAL: Selected IOVA mode 'VA' 00:02:54.361 EAL: VFIO support initialized 00:02:54.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:54.361 EAL: Using IOMMU type 1 (Type 1) 00:02:54.361 EAL: Ignore mapping IO port bar(1) 00:02:54.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:02:54.361 EAL: Ignore mapping IO port bar(1) 00:02:54.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:02:54.361 EAL: Ignore mapping IO port bar(1) 00:02:54.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:02:54.361 EAL: Ignore mapping IO port bar(1) 00:02:54.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:02:54.620 EAL: Ignore mapping IO port bar(1) 00:02:54.620 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:02:54.620 EAL: Ignore mapping IO port bar(1) 00:02:54.620 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:02:54.620 EAL: Ignore mapping IO port bar(1) 00:02:54.620 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:02:54.620 EAL: Ignore mapping IO port bar(1) 00:02:54.620 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:02:55.188 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:02:55.188 EAL: Ignore mapping IO port bar(1) 00:02:55.188 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:02:55.188 EAL: Ignore mapping IO port bar(1) 00:02:55.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:02:55.189 EAL: Ignore mapping IO port bar(1) 00:02:55.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:02:55.189 EAL: Ignore mapping IO port bar(1) 00:02:55.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:02:55.447 EAL: Ignore mapping IO port bar(1) 00:02:55.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:02:55.447 EAL: Ignore mapping IO port bar(1) 00:02:55.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:02:55.447 EAL: Ignore mapping IO port bar(1) 00:02:55.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:02:55.447 EAL: Ignore mapping IO port bar(1) 00:02:55.447 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:02:58.731 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:02:58.731 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:02:59.298 Starting DPDK initialization... 00:02:59.298 Starting SPDK post initialization... 00:02:59.298 SPDK NVMe probe 00:02:59.298 Attaching to 0000:5e:00.0 00:02:59.298 Attached to 0000:5e:00.0 00:02:59.298 Cleaning up... 
00:02:59.298 00:02:59.298 real 0m4.888s 00:02:59.298 user 0m3.443s 00:02:59.298 sys 0m0.517s 00:02:59.298 16:14:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.298 16:14:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:02:59.298 ************************************ 00:02:59.298 END TEST env_dpdk_post_init 00:02:59.298 ************************************ 00:02:59.298 16:14:25 env -- env/env.sh@26 -- # uname 00:02:59.298 16:14:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:02:59.298 16:14:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:02:59.298 16:14:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:59.298 16:14:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:59.298 16:14:25 env -- common/autotest_common.sh@10 -- # set +x 00:02:59.298 ************************************ 00:02:59.298 START TEST env_mem_callbacks 00:02:59.298 ************************************ 00:02:59.298 16:14:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:02:59.298 EAL: Detected CPU lcores: 96 00:02:59.298 EAL: Detected NUMA nodes: 2 00:02:59.298 EAL: Detected shared linkage of DPDK 00:02:59.298 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:59.298 EAL: Selected IOVA mode 'VA' 00:02:59.298 EAL: VFIO support initialized 00:02:59.298 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:59.298 00:02:59.298 00:02:59.298 CUnit - A unit testing framework for C - Version 2.1-3 00:02:59.298 http://cunit.sourceforge.net/ 00:02:59.298 00:02:59.298 00:02:59.298 Suite: memory 00:02:59.298 Test: test ... 
00:02:59.298 register 0x200000200000 2097152 00:02:59.298 malloc 3145728 00:02:59.298 register 0x200000400000 4194304 00:02:59.298 buf 0x200000500000 len 3145728 PASSED 00:02:59.298 malloc 64 00:02:59.298 buf 0x2000004fff40 len 64 PASSED 00:02:59.298 malloc 4194304 00:02:59.298 register 0x200000800000 6291456 00:02:59.298 buf 0x200000a00000 len 4194304 PASSED 00:02:59.298 free 0x200000500000 3145728 00:02:59.298 free 0x2000004fff40 64 00:02:59.298 unregister 0x200000400000 4194304 PASSED 00:02:59.298 free 0x200000a00000 4194304 00:02:59.298 unregister 0x200000800000 6291456 PASSED 00:02:59.298 malloc 8388608 00:02:59.298 register 0x200000400000 10485760 00:02:59.298 buf 0x200000600000 len 8388608 PASSED 00:02:59.298 free 0x200000600000 8388608 00:02:59.298 unregister 0x200000400000 10485760 PASSED 00:02:59.298 passed 00:02:59.298 00:02:59.298 Run Summary: Type Total Ran Passed Failed Inactive 00:02:59.298 suites 1 1 n/a 0 0 00:02:59.298 tests 1 1 1 0 0 00:02:59.298 asserts 15 15 15 0 n/a 00:02:59.298 00:02:59.298 Elapsed time = 0.005 seconds 00:02:59.298 00:02:59.298 real 0m0.057s 00:02:59.298 user 0m0.015s 00:02:59.298 sys 0m0.042s 00:02:59.298 16:14:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.298 16:14:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:02:59.298 ************************************ 00:02:59.298 END TEST env_mem_callbacks 00:02:59.298 ************************************ 00:02:59.298 00:02:59.298 real 0m6.760s 00:02:59.298 user 0m4.492s 00:02:59.298 sys 0m1.345s 00:02:59.298 16:14:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.298 16:14:26 env -- common/autotest_common.sh@10 -- # set +x 00:02:59.298 ************************************ 00:02:59.298 END TEST env 00:02:59.298 ************************************ 00:02:59.298 16:14:26 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:02:59.298 16:14:26 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:59.298 16:14:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:59.298 16:14:26 -- common/autotest_common.sh@10 -- # set +x 00:02:59.557 ************************************ 00:02:59.557 START TEST rpc 00:02:59.557 ************************************ 00:02:59.557 16:14:26 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:02:59.557 * Looking for test storage... 00:02:59.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:59.557 16:14:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:59.557 16:14:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:02:59.557 16:14:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:59.557 16:14:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:59.557 16:14:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:59.557 16:14:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:59.557 16:14:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:59.557 16:14:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.557 16:14:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:59.557 16:14:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:59.557 16:14:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:59.557 16:14:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:59.557 16:14:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:59.558 16:14:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:59.558 16:14:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:59.558 16:14:26 rpc -- scripts/common.sh@345 -- # : 1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:59.558 16:14:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.558 16:14:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@353 -- # local d=1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.558 16:14:26 rpc -- scripts/common.sh@355 -- # echo 1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:59.558 16:14:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:02:59.558 16:14:26 rpc -- scripts/common.sh@353 -- # local d=2 00:02:59.558 16:14:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.558 16:14:26 rpc -- scripts/common.sh@355 -- # echo 2 00:02:59.558 16:14:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:59.558 16:14:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:59.558 16:14:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:59.558 16:14:26 rpc -- scripts/common.sh@368 -- # return 0 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.558 --rc genhtml_branch_coverage=1 00:02:59.558 --rc genhtml_function_coverage=1 00:02:59.558 --rc genhtml_legend=1 00:02:59.558 --rc geninfo_all_blocks=1 00:02:59.558 --rc geninfo_unexecuted_blocks=1 00:02:59.558 00:02:59.558 ' 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.558 --rc genhtml_branch_coverage=1 00:02:59.558 --rc genhtml_function_coverage=1 00:02:59.558 --rc genhtml_legend=1 00:02:59.558 --rc geninfo_all_blocks=1 00:02:59.558 --rc geninfo_unexecuted_blocks=1 00:02:59.558 00:02:59.558 ' 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:02:59.558 --rc genhtml_branch_coverage=1 00:02:59.558 --rc genhtml_function_coverage=1 00:02:59.558 --rc genhtml_legend=1 00:02:59.558 --rc geninfo_all_blocks=1 00:02:59.558 --rc geninfo_unexecuted_blocks=1 00:02:59.558 00:02:59.558 ' 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:59.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.558 --rc genhtml_branch_coverage=1 00:02:59.558 --rc genhtml_function_coverage=1 00:02:59.558 --rc genhtml_legend=1 00:02:59.558 --rc geninfo_all_blocks=1 00:02:59.558 --rc geninfo_unexecuted_blocks=1 00:02:59.558 00:02:59.558 ' 00:02:59.558 16:14:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2618339 00:02:59.558 16:14:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:59.558 16:14:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2618339 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 2618339 ']' 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:59.558 16:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:59.558 16:14:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:02:59.558 [2024-11-04 16:14:26.355951] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:02:59.558 [2024-11-04 16:14:26.355996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618339 ] 00:02:59.817 [2024-11-04 16:14:26.420303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:59.817 [2024-11-04 16:14:26.461562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:02:59.817 [2024-11-04 16:14:26.461598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2618339' to capture a snapshot of events at runtime. 00:02:59.817 [2024-11-04 16:14:26.461609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:02:59.817 [2024-11-04 16:14:26.461617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:02:59.817 [2024-11-04 16:14:26.461622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2618339 for offline analysis/debug. 
00:02:59.817 [2024-11-04 16:14:26.462192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:00.076 16:14:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:00.076 16:14:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:00.076 16:14:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:00.076 16:14:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:00.076 16:14:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:00.076 16:14:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:00.076 16:14:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.076 16:14:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.076 16:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 ************************************ 00:03:00.076 START TEST rpc_integrity 00:03:00.076 ************************************ 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.076 16:14:26 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:00.076 { 00:03:00.076 "name": "Malloc0", 00:03:00.076 "aliases": [ 00:03:00.076 "61c1d531-00b5-4650-9d14-84ce1836e339" 00:03:00.076 ], 00:03:00.076 "product_name": "Malloc disk", 00:03:00.076 "block_size": 512, 00:03:00.076 "num_blocks": 16384, 00:03:00.076 "uuid": "61c1d531-00b5-4650-9d14-84ce1836e339", 00:03:00.076 "assigned_rate_limits": { 00:03:00.076 "rw_ios_per_sec": 0, 00:03:00.076 "rw_mbytes_per_sec": 0, 00:03:00.076 "r_mbytes_per_sec": 0, 00:03:00.076 "w_mbytes_per_sec": 0 00:03:00.076 }, 00:03:00.076 "claimed": false, 00:03:00.076 "zoned": false, 00:03:00.076 "supported_io_types": { 00:03:00.076 "read": true, 00:03:00.076 "write": true, 00:03:00.076 "unmap": true, 00:03:00.076 "flush": true, 00:03:00.076 "reset": true, 00:03:00.076 "nvme_admin": false, 00:03:00.076 "nvme_io": false, 00:03:00.076 "nvme_io_md": false, 00:03:00.076 "write_zeroes": true, 00:03:00.076 "zcopy": true, 00:03:00.076 "get_zone_info": false, 00:03:00.076 
"zone_management": false, 00:03:00.076 "zone_append": false, 00:03:00.076 "compare": false, 00:03:00.076 "compare_and_write": false, 00:03:00.076 "abort": true, 00:03:00.076 "seek_hole": false, 00:03:00.076 "seek_data": false, 00:03:00.076 "copy": true, 00:03:00.076 "nvme_iov_md": false 00:03:00.076 }, 00:03:00.076 "memory_domains": [ 00:03:00.076 { 00:03:00.076 "dma_device_id": "system", 00:03:00.076 "dma_device_type": 1 00:03:00.076 }, 00:03:00.076 { 00:03:00.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.076 "dma_device_type": 2 00:03:00.076 } 00:03:00.076 ], 00:03:00.076 "driver_specific": {} 00:03:00.076 } 00:03:00.076 ]' 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 [2024-11-04 16:14:26.831031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:00.076 [2024-11-04 16:14:26.831058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:00.076 [2024-11-04 16:14:26.831070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11bd6d0 00:03:00.076 [2024-11-04 16:14:26.831076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:00.076 [2024-11-04 16:14:26.832126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:00.076 [2024-11-04 16:14:26.832146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:00.076 Passthru0 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.076 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.076 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:00.076 { 00:03:00.076 "name": "Malloc0", 00:03:00.076 "aliases": [ 00:03:00.076 "61c1d531-00b5-4650-9d14-84ce1836e339" 00:03:00.076 ], 00:03:00.076 "product_name": "Malloc disk", 00:03:00.076 "block_size": 512, 00:03:00.076 "num_blocks": 16384, 00:03:00.076 "uuid": "61c1d531-00b5-4650-9d14-84ce1836e339", 00:03:00.076 "assigned_rate_limits": { 00:03:00.076 "rw_ios_per_sec": 0, 00:03:00.076 "rw_mbytes_per_sec": 0, 00:03:00.076 "r_mbytes_per_sec": 0, 00:03:00.076 "w_mbytes_per_sec": 0 00:03:00.076 }, 00:03:00.076 "claimed": true, 00:03:00.076 "claim_type": "exclusive_write", 00:03:00.076 "zoned": false, 00:03:00.076 "supported_io_types": { 00:03:00.076 "read": true, 00:03:00.076 "write": true, 00:03:00.076 "unmap": true, 00:03:00.076 "flush": true, 00:03:00.076 "reset": true, 00:03:00.076 "nvme_admin": false, 00:03:00.076 "nvme_io": false, 00:03:00.076 "nvme_io_md": false, 00:03:00.076 "write_zeroes": true, 00:03:00.076 "zcopy": true, 00:03:00.076 "get_zone_info": false, 00:03:00.076 "zone_management": false, 00:03:00.076 "zone_append": false, 00:03:00.076 "compare": false, 00:03:00.076 "compare_and_write": false, 00:03:00.076 "abort": true, 00:03:00.076 "seek_hole": false, 00:03:00.076 "seek_data": false, 00:03:00.076 "copy": true, 00:03:00.076 "nvme_iov_md": false 00:03:00.076 }, 00:03:00.076 "memory_domains": [ 00:03:00.076 { 00:03:00.076 "dma_device_id": "system", 00:03:00.076 "dma_device_type": 1 00:03:00.076 }, 00:03:00.076 { 00:03:00.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.076 "dma_device_type": 2 00:03:00.076 } 00:03:00.076 ], 00:03:00.076 "driver_specific": {} 00:03:00.076 }, 00:03:00.076 { 
00:03:00.076 "name": "Passthru0", 00:03:00.076 "aliases": [ 00:03:00.076 "5c9aae34-a7a7-562f-b035-a480c5ddf340" 00:03:00.076 ], 00:03:00.076 "product_name": "passthru", 00:03:00.076 "block_size": 512, 00:03:00.076 "num_blocks": 16384, 00:03:00.076 "uuid": "5c9aae34-a7a7-562f-b035-a480c5ddf340", 00:03:00.076 "assigned_rate_limits": { 00:03:00.076 "rw_ios_per_sec": 0, 00:03:00.076 "rw_mbytes_per_sec": 0, 00:03:00.076 "r_mbytes_per_sec": 0, 00:03:00.076 "w_mbytes_per_sec": 0 00:03:00.076 }, 00:03:00.076 "claimed": false, 00:03:00.076 "zoned": false, 00:03:00.076 "supported_io_types": { 00:03:00.076 "read": true, 00:03:00.076 "write": true, 00:03:00.076 "unmap": true, 00:03:00.076 "flush": true, 00:03:00.077 "reset": true, 00:03:00.077 "nvme_admin": false, 00:03:00.077 "nvme_io": false, 00:03:00.077 "nvme_io_md": false, 00:03:00.077 "write_zeroes": true, 00:03:00.077 "zcopy": true, 00:03:00.077 "get_zone_info": false, 00:03:00.077 "zone_management": false, 00:03:00.077 "zone_append": false, 00:03:00.077 "compare": false, 00:03:00.077 "compare_and_write": false, 00:03:00.077 "abort": true, 00:03:00.077 "seek_hole": false, 00:03:00.077 "seek_data": false, 00:03:00.077 "copy": true, 00:03:00.077 "nvme_iov_md": false 00:03:00.077 }, 00:03:00.077 "memory_domains": [ 00:03:00.077 { 00:03:00.077 "dma_device_id": "system", 00:03:00.077 "dma_device_type": 1 00:03:00.077 }, 00:03:00.077 { 00:03:00.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.077 "dma_device_type": 2 00:03:00.077 } 00:03:00.077 ], 00:03:00.077 "driver_specific": { 00:03:00.077 "passthru": { 00:03:00.077 "name": "Passthru0", 00:03:00.077 "base_bdev_name": "Malloc0" 00:03:00.077 } 00:03:00.077 } 00:03:00.077 } 00:03:00.077 ]' 00:03:00.077 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:00.077 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:00.077 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:00.077 16:14:26 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.077 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.077 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.077 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:00.077 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.077 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.335 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.335 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:00.335 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:00.335 16:14:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:00.335 00:03:00.335 real 0m0.244s 00:03:00.335 user 0m0.153s 00:03:00.335 sys 0m0.029s 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.335 16:14:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 ************************************ 00:03:00.335 END TEST rpc_integrity 00:03:00.335 ************************************ 00:03:00.335 16:14:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:00.335 16:14:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.335 16:14:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.335 16:14:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 ************************************ 00:03:00.335 START TEST rpc_plugins 
00:03:00.335 ************************************ 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:00.335 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.335 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:00.335 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:00.335 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.335 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:00.335 { 00:03:00.335 "name": "Malloc1", 00:03:00.335 "aliases": [ 00:03:00.335 "b6764fd4-1499-49db-8b5e-1b6c44acc27f" 00:03:00.335 ], 00:03:00.335 "product_name": "Malloc disk", 00:03:00.335 "block_size": 4096, 00:03:00.335 "num_blocks": 256, 00:03:00.335 "uuid": "b6764fd4-1499-49db-8b5e-1b6c44acc27f", 00:03:00.335 "assigned_rate_limits": { 00:03:00.335 "rw_ios_per_sec": 0, 00:03:00.335 "rw_mbytes_per_sec": 0, 00:03:00.335 "r_mbytes_per_sec": 0, 00:03:00.336 "w_mbytes_per_sec": 0 00:03:00.336 }, 00:03:00.336 "claimed": false, 00:03:00.336 "zoned": false, 00:03:00.336 "supported_io_types": { 00:03:00.336 "read": true, 00:03:00.336 "write": true, 00:03:00.336 "unmap": true, 00:03:00.336 "flush": true, 00:03:00.336 "reset": true, 00:03:00.336 "nvme_admin": false, 00:03:00.336 "nvme_io": false, 00:03:00.336 "nvme_io_md": false, 00:03:00.336 "write_zeroes": true, 00:03:00.336 "zcopy": true, 00:03:00.336 "get_zone_info": false, 00:03:00.336 "zone_management": false, 00:03:00.336 
"zone_append": false, 00:03:00.336 "compare": false, 00:03:00.336 "compare_and_write": false, 00:03:00.336 "abort": true, 00:03:00.336 "seek_hole": false, 00:03:00.336 "seek_data": false, 00:03:00.336 "copy": true, 00:03:00.336 "nvme_iov_md": false 00:03:00.336 }, 00:03:00.336 "memory_domains": [ 00:03:00.336 { 00:03:00.336 "dma_device_id": "system", 00:03:00.336 "dma_device_type": 1 00:03:00.336 }, 00:03:00.336 { 00:03:00.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.336 "dma_device_type": 2 00:03:00.336 } 00:03:00.336 ], 00:03:00.336 "driver_specific": {} 00:03:00.336 } 00:03:00.336 ]' 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:00.336 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:00.336 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:00.593 16:14:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:00.593 00:03:00.593 real 0m0.140s 00:03:00.593 user 0m0.087s 00:03:00.593 sys 0m0.015s 00:03:00.593 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.593 16:14:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:00.593 ************************************ 
00:03:00.593 END TEST rpc_plugins 00:03:00.593 ************************************ 00:03:00.593 16:14:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:00.593 16:14:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.593 16:14:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.593 16:14:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.593 ************************************ 00:03:00.594 START TEST rpc_trace_cmd_test 00:03:00.594 ************************************ 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:00.594 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2618339", 00:03:00.594 "tpoint_group_mask": "0x8", 00:03:00.594 "iscsi_conn": { 00:03:00.594 "mask": "0x2", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "scsi": { 00:03:00.594 "mask": "0x4", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "bdev": { 00:03:00.594 "mask": "0x8", 00:03:00.594 "tpoint_mask": "0xffffffffffffffff" 00:03:00.594 }, 00:03:00.594 "nvmf_rdma": { 00:03:00.594 "mask": "0x10", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "nvmf_tcp": { 00:03:00.594 "mask": "0x20", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "ftl": { 00:03:00.594 "mask": "0x40", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "blobfs": { 00:03:00.594 "mask": "0x80", 00:03:00.594 
"tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "dsa": { 00:03:00.594 "mask": "0x200", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "thread": { 00:03:00.594 "mask": "0x400", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "nvme_pcie": { 00:03:00.594 "mask": "0x800", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "iaa": { 00:03:00.594 "mask": "0x1000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "nvme_tcp": { 00:03:00.594 "mask": "0x2000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "bdev_nvme": { 00:03:00.594 "mask": "0x4000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "sock": { 00:03:00.594 "mask": "0x8000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "blob": { 00:03:00.594 "mask": "0x10000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "bdev_raid": { 00:03:00.594 "mask": "0x20000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 }, 00:03:00.594 "scheduler": { 00:03:00.594 "mask": "0x40000", 00:03:00.594 "tpoint_mask": "0x0" 00:03:00.594 } 00:03:00.594 }' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:00.594 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:00.853 16:14:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:00.853 00:03:00.853 real 0m0.226s 00:03:00.853 user 0m0.196s 00:03:00.853 sys 0m0.023s 00:03:00.853 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.853 16:14:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:00.853 ************************************ 00:03:00.853 END TEST rpc_trace_cmd_test 00:03:00.853 ************************************ 00:03:00.853 16:14:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:00.853 16:14:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:00.853 16:14:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:00.853 16:14:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.853 16:14:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.853 16:14:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.853 ************************************ 00:03:00.853 START TEST rpc_daemon_integrity 00:03:00.853 ************************************ 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:00.853 { 00:03:00.853 "name": "Malloc2", 00:03:00.853 "aliases": [ 00:03:00.853 "244b0a25-781b-4477-8efe-df260d1934af" 00:03:00.853 ], 00:03:00.853 "product_name": "Malloc disk", 00:03:00.853 "block_size": 512, 00:03:00.853 "num_blocks": 16384, 00:03:00.853 "uuid": "244b0a25-781b-4477-8efe-df260d1934af", 00:03:00.853 "assigned_rate_limits": { 00:03:00.853 "rw_ios_per_sec": 0, 00:03:00.853 "rw_mbytes_per_sec": 0, 00:03:00.853 "r_mbytes_per_sec": 0, 00:03:00.853 "w_mbytes_per_sec": 0 00:03:00.853 }, 00:03:00.853 "claimed": false, 00:03:00.853 "zoned": false, 00:03:00.853 "supported_io_types": { 00:03:00.853 "read": true, 00:03:00.853 "write": true, 00:03:00.853 "unmap": true, 00:03:00.853 "flush": true, 00:03:00.853 "reset": true, 00:03:00.853 "nvme_admin": false, 00:03:00.853 "nvme_io": false, 00:03:00.853 "nvme_io_md": false, 00:03:00.853 "write_zeroes": true, 00:03:00.853 "zcopy": true, 00:03:00.853 "get_zone_info": false, 00:03:00.853 "zone_management": false, 00:03:00.853 "zone_append": false, 00:03:00.853 "compare": false, 00:03:00.853 "compare_and_write": false, 00:03:00.853 "abort": true, 00:03:00.853 "seek_hole": false, 00:03:00.853 "seek_data": false, 00:03:00.853 "copy": true, 00:03:00.853 "nvme_iov_md": false 00:03:00.853 }, 00:03:00.853 "memory_domains": [ 00:03:00.853 { 
00:03:00.853 "dma_device_id": "system", 00:03:00.853 "dma_device_type": 1 00:03:00.853 }, 00:03:00.853 { 00:03:00.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.853 "dma_device_type": 2 00:03:00.853 } 00:03:00.853 ], 00:03:00.853 "driver_specific": {} 00:03:00.853 } 00:03:00.853 ]' 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:00.853 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.854 [2024-11-04 16:14:27.645241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:00.854 [2024-11-04 16:14:27.645269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:00.854 [2024-11-04 16:14:27.645280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x124de60 00:03:00.854 [2024-11-04 16:14:27.645286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:00.854 [2024-11-04 16:14:27.646366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:00.854 [2024-11-04 16:14:27.646387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:00.854 Passthru0 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:00.854 { 00:03:00.854 "name": "Malloc2", 00:03:00.854 "aliases": [ 00:03:00.854 "244b0a25-781b-4477-8efe-df260d1934af" 00:03:00.854 ], 00:03:00.854 "product_name": "Malloc disk", 00:03:00.854 "block_size": 512, 00:03:00.854 "num_blocks": 16384, 00:03:00.854 "uuid": "244b0a25-781b-4477-8efe-df260d1934af", 00:03:00.854 "assigned_rate_limits": { 00:03:00.854 "rw_ios_per_sec": 0, 00:03:00.854 "rw_mbytes_per_sec": 0, 00:03:00.854 "r_mbytes_per_sec": 0, 00:03:00.854 "w_mbytes_per_sec": 0 00:03:00.854 }, 00:03:00.854 "claimed": true, 00:03:00.854 "claim_type": "exclusive_write", 00:03:00.854 "zoned": false, 00:03:00.854 "supported_io_types": { 00:03:00.854 "read": true, 00:03:00.854 "write": true, 00:03:00.854 "unmap": true, 00:03:00.854 "flush": true, 00:03:00.854 "reset": true, 00:03:00.854 "nvme_admin": false, 00:03:00.854 "nvme_io": false, 00:03:00.854 "nvme_io_md": false, 00:03:00.854 "write_zeroes": true, 00:03:00.854 "zcopy": true, 00:03:00.854 "get_zone_info": false, 00:03:00.854 "zone_management": false, 00:03:00.854 "zone_append": false, 00:03:00.854 "compare": false, 00:03:00.854 "compare_and_write": false, 00:03:00.854 "abort": true, 00:03:00.854 "seek_hole": false, 00:03:00.854 "seek_data": false, 00:03:00.854 "copy": true, 00:03:00.854 "nvme_iov_md": false 00:03:00.854 }, 00:03:00.854 "memory_domains": [ 00:03:00.854 { 00:03:00.854 "dma_device_id": "system", 00:03:00.854 "dma_device_type": 1 00:03:00.854 }, 00:03:00.854 { 00:03:00.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.854 "dma_device_type": 2 00:03:00.854 } 00:03:00.854 ], 00:03:00.854 "driver_specific": {} 00:03:00.854 }, 00:03:00.854 { 00:03:00.854 "name": "Passthru0", 00:03:00.854 "aliases": [ 00:03:00.854 "8ecdfe81-0cbe-5f39-976a-73b77563a873" 00:03:00.854 ], 00:03:00.854 "product_name": "passthru", 00:03:00.854 "block_size": 512, 00:03:00.854 "num_blocks": 16384, 00:03:00.854 "uuid": 
"8ecdfe81-0cbe-5f39-976a-73b77563a873", 00:03:00.854 "assigned_rate_limits": { 00:03:00.854 "rw_ios_per_sec": 0, 00:03:00.854 "rw_mbytes_per_sec": 0, 00:03:00.854 "r_mbytes_per_sec": 0, 00:03:00.854 "w_mbytes_per_sec": 0 00:03:00.854 }, 00:03:00.854 "claimed": false, 00:03:00.854 "zoned": false, 00:03:00.854 "supported_io_types": { 00:03:00.854 "read": true, 00:03:00.854 "write": true, 00:03:00.854 "unmap": true, 00:03:00.854 "flush": true, 00:03:00.854 "reset": true, 00:03:00.854 "nvme_admin": false, 00:03:00.854 "nvme_io": false, 00:03:00.854 "nvme_io_md": false, 00:03:00.854 "write_zeroes": true, 00:03:00.854 "zcopy": true, 00:03:00.854 "get_zone_info": false, 00:03:00.854 "zone_management": false, 00:03:00.854 "zone_append": false, 00:03:00.854 "compare": false, 00:03:00.854 "compare_and_write": false, 00:03:00.854 "abort": true, 00:03:00.854 "seek_hole": false, 00:03:00.854 "seek_data": false, 00:03:00.854 "copy": true, 00:03:00.854 "nvme_iov_md": false 00:03:00.854 }, 00:03:00.854 "memory_domains": [ 00:03:00.854 { 00:03:00.854 "dma_device_id": "system", 00:03:00.854 "dma_device_type": 1 00:03:00.854 }, 00:03:00.854 { 00:03:00.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.854 "dma_device_type": 2 00:03:00.854 } 00:03:00.854 ], 00:03:00.854 "driver_specific": { 00:03:00.854 "passthru": { 00:03:00.854 "name": "Passthru0", 00:03:00.854 "base_bdev_name": "Malloc2" 00:03:00.854 } 00:03:00.854 } 00:03:00.854 } 00:03:00.854 ]' 00:03:00.854 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:01.114 00:03:01.114 real 0m0.247s 00:03:01.114 user 0m0.155s 00:03:01.114 sys 0m0.034s 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:01.114 16:14:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.114 ************************************ 00:03:01.114 END TEST rpc_daemon_integrity 00:03:01.114 ************************************ 00:03:01.114 16:14:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:01.114 16:14:27 rpc -- rpc/rpc.sh@84 -- # killprocess 2618339 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 2618339 ']' 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@958 -- # kill -0 2618339 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@959 -- # uname 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:01.114 16:14:27 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2618339 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2618339' 00:03:01.114 killing process with pid 2618339 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@973 -- # kill 2618339 00:03:01.114 16:14:27 rpc -- common/autotest_common.sh@978 -- # wait 2618339 00:03:01.373 00:03:01.373 real 0m2.017s 00:03:01.373 user 0m2.536s 00:03:01.373 sys 0m0.688s 00:03:01.373 16:14:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:01.373 16:14:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.373 ************************************ 00:03:01.373 END TEST rpc 00:03:01.373 ************************************ 00:03:01.373 16:14:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:01.373 16:14:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:01.373 16:14:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:01.373 16:14:28 -- common/autotest_common.sh@10 -- # set +x 00:03:01.631 ************************************ 00:03:01.631 START TEST skip_rpc 00:03:01.631 ************************************ 00:03:01.631 16:14:28 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:01.631 * Looking for test storage... 
00:03:01.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:01.631 16:14:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:01.631 16:14:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:01.631 16:14:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:01.631 16:14:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:01.631 16:14:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:01.632 16:14:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:01.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.632 --rc genhtml_branch_coverage=1 00:03:01.632 --rc genhtml_function_coverage=1 00:03:01.632 --rc genhtml_legend=1 00:03:01.632 --rc geninfo_all_blocks=1 00:03:01.632 --rc geninfo_unexecuted_blocks=1 00:03:01.632 00:03:01.632 ' 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:01.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.632 --rc genhtml_branch_coverage=1 00:03:01.632 --rc genhtml_function_coverage=1 00:03:01.632 --rc genhtml_legend=1 00:03:01.632 --rc geninfo_all_blocks=1 00:03:01.632 --rc geninfo_unexecuted_blocks=1 00:03:01.632 00:03:01.632 ' 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:01.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.632 --rc genhtml_branch_coverage=1 00:03:01.632 --rc genhtml_function_coverage=1 00:03:01.632 --rc genhtml_legend=1 00:03:01.632 --rc geninfo_all_blocks=1 00:03:01.632 --rc geninfo_unexecuted_blocks=1 00:03:01.632 00:03:01.632 ' 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:01.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.632 --rc genhtml_branch_coverage=1 00:03:01.632 --rc genhtml_function_coverage=1 00:03:01.632 --rc genhtml_legend=1 00:03:01.632 --rc geninfo_all_blocks=1 00:03:01.632 --rc geninfo_unexecuted_blocks=1 00:03:01.632 00:03:01.632 ' 00:03:01.632 16:14:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:01.632 16:14:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:01.632 16:14:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:01.632 16:14:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.632 ************************************ 00:03:01.632 START TEST skip_rpc 00:03:01.632 ************************************ 00:03:01.632 16:14:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:01.632 16:14:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2618876 00:03:01.632 16:14:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:01.632 16:14:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:01.632 16:14:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:01.891 [2024-11-04 16:14:28.493048] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:01.891 [2024-11-04 16:14:28.493088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618876 ] 00:03:01.891 [2024-11-04 16:14:28.557189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:01.891 [2024-11-04 16:14:28.597254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:07.169 16:14:33 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2618876 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2618876 ']' 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2618876 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2618876 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2618876' 00:03:07.169 killing process with pid 2618876 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2618876 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2618876 00:03:07.169 00:03:07.169 real 0m5.367s 00:03:07.169 user 0m5.129s 00:03:07.169 sys 0m0.274s 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:07.169 16:14:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 ************************************ 00:03:07.169 END TEST skip_rpc 00:03:07.169 ************************************ 00:03:07.169 16:14:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:07.169 16:14:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:07.169 16:14:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:07.169 16:14:33 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 ************************************ 00:03:07.169 START TEST skip_rpc_with_json 00:03:07.169 ************************************ 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2619842 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2619842 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2619842 ']' 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:07.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:07.169 16:14:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 [2024-11-04 16:14:33.925592] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:07.169 [2024-11-04 16:14:33.925639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619842 ] 00:03:07.169 [2024-11-04 16:14:33.987154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:07.428 [2024-11-04 16:14:34.029404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:07.428 [2024-11-04 16:14:34.233935] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:07.428 request: 00:03:07.428 { 00:03:07.428 "trtype": "tcp", 00:03:07.428 "method": "nvmf_get_transports", 00:03:07.428 "req_id": 1 00:03:07.428 } 00:03:07.428 Got JSON-RPC error response 00:03:07.428 response: 00:03:07.428 { 00:03:07.428 "code": -19, 00:03:07.428 "message": "No such device" 00:03:07.428 } 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:07.428 [2024-11-04 16:14:34.242028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:07.428 16:14:34 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:07.428 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:07.688 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:07.688 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:07.688 { 00:03:07.688 "subsystems": [ 00:03:07.688 { 00:03:07.688 "subsystem": "fsdev", 00:03:07.688 "config": [ 00:03:07.688 { 00:03:07.688 "method": "fsdev_set_opts", 00:03:07.688 "params": { 00:03:07.688 "fsdev_io_pool_size": 65535, 00:03:07.688 "fsdev_io_cache_size": 256 00:03:07.688 } 00:03:07.688 } 00:03:07.688 ] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "vfio_user_target", 00:03:07.688 "config": null 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "keyring", 00:03:07.688 "config": [] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "iobuf", 00:03:07.688 "config": [ 00:03:07.688 { 00:03:07.688 "method": "iobuf_set_options", 00:03:07.688 "params": { 00:03:07.688 "small_pool_count": 8192, 00:03:07.688 "large_pool_count": 1024, 00:03:07.688 "small_bufsize": 8192, 00:03:07.688 "large_bufsize": 135168, 00:03:07.688 "enable_numa": false 00:03:07.688 } 00:03:07.688 } 00:03:07.688 ] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "sock", 00:03:07.688 "config": [ 00:03:07.688 { 00:03:07.688 "method": "sock_set_default_impl", 00:03:07.688 "params": { 00:03:07.688 "impl_name": "posix" 00:03:07.688 } 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "method": "sock_impl_set_options", 00:03:07.688 "params": { 00:03:07.688 "impl_name": "ssl", 00:03:07.688 "recv_buf_size": 4096, 00:03:07.688 "send_buf_size": 4096, 
00:03:07.688 "enable_recv_pipe": true, 00:03:07.688 "enable_quickack": false, 00:03:07.688 "enable_placement_id": 0, 00:03:07.688 "enable_zerocopy_send_server": true, 00:03:07.688 "enable_zerocopy_send_client": false, 00:03:07.688 "zerocopy_threshold": 0, 00:03:07.688 "tls_version": 0, 00:03:07.688 "enable_ktls": false 00:03:07.688 } 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "method": "sock_impl_set_options", 00:03:07.688 "params": { 00:03:07.688 "impl_name": "posix", 00:03:07.688 "recv_buf_size": 2097152, 00:03:07.688 "send_buf_size": 2097152, 00:03:07.688 "enable_recv_pipe": true, 00:03:07.688 "enable_quickack": false, 00:03:07.688 "enable_placement_id": 0, 00:03:07.688 "enable_zerocopy_send_server": true, 00:03:07.688 "enable_zerocopy_send_client": false, 00:03:07.688 "zerocopy_threshold": 0, 00:03:07.688 "tls_version": 0, 00:03:07.688 "enable_ktls": false 00:03:07.688 } 00:03:07.688 } 00:03:07.688 ] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "vmd", 00:03:07.688 "config": [] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "accel", 00:03:07.688 "config": [ 00:03:07.688 { 00:03:07.688 "method": "accel_set_options", 00:03:07.688 "params": { 00:03:07.688 "small_cache_size": 128, 00:03:07.688 "large_cache_size": 16, 00:03:07.688 "task_count": 2048, 00:03:07.688 "sequence_count": 2048, 00:03:07.688 "buf_count": 2048 00:03:07.688 } 00:03:07.688 } 00:03:07.688 ] 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "subsystem": "bdev", 00:03:07.688 "config": [ 00:03:07.688 { 00:03:07.688 "method": "bdev_set_options", 00:03:07.688 "params": { 00:03:07.688 "bdev_io_pool_size": 65535, 00:03:07.688 "bdev_io_cache_size": 256, 00:03:07.688 "bdev_auto_examine": true, 00:03:07.688 "iobuf_small_cache_size": 128, 00:03:07.688 "iobuf_large_cache_size": 16 00:03:07.688 } 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "method": "bdev_raid_set_options", 00:03:07.688 "params": { 00:03:07.688 "process_window_size_kb": 1024, 00:03:07.688 "process_max_bandwidth_mb_sec": 0 
00:03:07.688 } 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "method": "bdev_iscsi_set_options", 00:03:07.688 "params": { 00:03:07.688 "timeout_sec": 30 00:03:07.688 } 00:03:07.688 }, 00:03:07.688 { 00:03:07.688 "method": "bdev_nvme_set_options", 00:03:07.688 "params": { 00:03:07.688 "action_on_timeout": "none", 00:03:07.688 "timeout_us": 0, 00:03:07.688 "timeout_admin_us": 0, 00:03:07.688 "keep_alive_timeout_ms": 10000, 00:03:07.688 "arbitration_burst": 0, 00:03:07.688 "low_priority_weight": 0, 00:03:07.688 "medium_priority_weight": 0, 00:03:07.688 "high_priority_weight": 0, 00:03:07.688 "nvme_adminq_poll_period_us": 10000, 00:03:07.688 "nvme_ioq_poll_period_us": 0, 00:03:07.688 "io_queue_requests": 0, 00:03:07.688 "delay_cmd_submit": true, 00:03:07.688 "transport_retry_count": 4, 00:03:07.688 "bdev_retry_count": 3, 00:03:07.688 "transport_ack_timeout": 0, 00:03:07.688 "ctrlr_loss_timeout_sec": 0, 00:03:07.688 "reconnect_delay_sec": 0, 00:03:07.688 "fast_io_fail_timeout_sec": 0, 00:03:07.688 "disable_auto_failback": false, 00:03:07.688 "generate_uuids": false, 00:03:07.688 "transport_tos": 0, 00:03:07.688 "nvme_error_stat": false, 00:03:07.688 "rdma_srq_size": 0, 00:03:07.689 "io_path_stat": false, 00:03:07.689 "allow_accel_sequence": false, 00:03:07.689 "rdma_max_cq_size": 0, 00:03:07.689 "rdma_cm_event_timeout_ms": 0, 00:03:07.689 "dhchap_digests": [ 00:03:07.689 "sha256", 00:03:07.689 "sha384", 00:03:07.689 "sha512" 00:03:07.689 ], 00:03:07.689 "dhchap_dhgroups": [ 00:03:07.689 "null", 00:03:07.689 "ffdhe2048", 00:03:07.689 "ffdhe3072", 00:03:07.689 "ffdhe4096", 00:03:07.689 "ffdhe6144", 00:03:07.689 "ffdhe8192" 00:03:07.689 ] 00:03:07.689 } 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "method": "bdev_nvme_set_hotplug", 00:03:07.689 "params": { 00:03:07.689 "period_us": 100000, 00:03:07.689 "enable": false 00:03:07.689 } 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "method": "bdev_wait_for_examine" 00:03:07.689 } 00:03:07.689 ] 00:03:07.689 }, 00:03:07.689 { 
00:03:07.689 "subsystem": "scsi", 00:03:07.689 "config": null 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "scheduler", 00:03:07.689 "config": [ 00:03:07.689 { 00:03:07.689 "method": "framework_set_scheduler", 00:03:07.689 "params": { 00:03:07.689 "name": "static" 00:03:07.689 } 00:03:07.689 } 00:03:07.689 ] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "vhost_scsi", 00:03:07.689 "config": [] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "vhost_blk", 00:03:07.689 "config": [] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "ublk", 00:03:07.689 "config": [] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "nbd", 00:03:07.689 "config": [] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "nvmf", 00:03:07.689 "config": [ 00:03:07.689 { 00:03:07.689 "method": "nvmf_set_config", 00:03:07.689 "params": { 00:03:07.689 "discovery_filter": "match_any", 00:03:07.689 "admin_cmd_passthru": { 00:03:07.689 "identify_ctrlr": false 00:03:07.689 }, 00:03:07.689 "dhchap_digests": [ 00:03:07.689 "sha256", 00:03:07.689 "sha384", 00:03:07.689 "sha512" 00:03:07.689 ], 00:03:07.689 "dhchap_dhgroups": [ 00:03:07.689 "null", 00:03:07.689 "ffdhe2048", 00:03:07.689 "ffdhe3072", 00:03:07.689 "ffdhe4096", 00:03:07.689 "ffdhe6144", 00:03:07.689 "ffdhe8192" 00:03:07.689 ] 00:03:07.689 } 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "method": "nvmf_set_max_subsystems", 00:03:07.689 "params": { 00:03:07.689 "max_subsystems": 1024 00:03:07.689 } 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "method": "nvmf_set_crdt", 00:03:07.689 "params": { 00:03:07.689 "crdt1": 0, 00:03:07.689 "crdt2": 0, 00:03:07.689 "crdt3": 0 00:03:07.689 } 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "method": "nvmf_create_transport", 00:03:07.689 "params": { 00:03:07.689 "trtype": "TCP", 00:03:07.689 "max_queue_depth": 128, 00:03:07.689 "max_io_qpairs_per_ctrlr": 127, 00:03:07.689 "in_capsule_data_size": 4096, 00:03:07.689 "max_io_size": 131072, 00:03:07.689 
"io_unit_size": 131072, 00:03:07.689 "max_aq_depth": 128, 00:03:07.689 "num_shared_buffers": 511, 00:03:07.689 "buf_cache_size": 4294967295, 00:03:07.689 "dif_insert_or_strip": false, 00:03:07.689 "zcopy": false, 00:03:07.689 "c2h_success": true, 00:03:07.689 "sock_priority": 0, 00:03:07.689 "abort_timeout_sec": 1, 00:03:07.689 "ack_timeout": 0, 00:03:07.689 "data_wr_pool_size": 0 00:03:07.689 } 00:03:07.689 } 00:03:07.689 ] 00:03:07.689 }, 00:03:07.689 { 00:03:07.689 "subsystem": "iscsi", 00:03:07.689 "config": [ 00:03:07.689 { 00:03:07.689 "method": "iscsi_set_options", 00:03:07.689 "params": { 00:03:07.689 "node_base": "iqn.2016-06.io.spdk", 00:03:07.689 "max_sessions": 128, 00:03:07.689 "max_connections_per_session": 2, 00:03:07.689 "max_queue_depth": 64, 00:03:07.689 "default_time2wait": 2, 00:03:07.689 "default_time2retain": 20, 00:03:07.689 "first_burst_length": 8192, 00:03:07.689 "immediate_data": true, 00:03:07.689 "allow_duplicated_isid": false, 00:03:07.689 "error_recovery_level": 0, 00:03:07.689 "nop_timeout": 60, 00:03:07.689 "nop_in_interval": 30, 00:03:07.689 "disable_chap": false, 00:03:07.689 "require_chap": false, 00:03:07.689 "mutual_chap": false, 00:03:07.689 "chap_group": 0, 00:03:07.689 "max_large_datain_per_connection": 64, 00:03:07.689 "max_r2t_per_connection": 4, 00:03:07.689 "pdu_pool_size": 36864, 00:03:07.689 "immediate_data_pool_size": 16384, 00:03:07.689 "data_out_pool_size": 2048 00:03:07.689 } 00:03:07.689 } 00:03:07.689 ] 00:03:07.689 } 00:03:07.689 ] 00:03:07.689 } 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2619842 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2619842 ']' 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2619842 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619842 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619842' 00:03:07.689 killing process with pid 2619842 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2619842 00:03:07.689 16:14:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2619842 00:03:07.948 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2619943 00:03:07.948 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:07.948 16:14:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2619943 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2619943 ']' 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2619943 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619943 00:03:13.257 16:14:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:13.258 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:13.258 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619943' 00:03:13.258 killing process with pid 2619943 00:03:13.258 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2619943 00:03:13.258 16:14:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2619943 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:13.565 00:03:13.565 real 0m6.235s 00:03:13.565 user 0m5.953s 00:03:13.565 sys 0m0.567s 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:13.565 ************************************ 00:03:13.565 END TEST skip_rpc_with_json 00:03:13.565 ************************************ 00:03:13.565 16:14:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:13.565 ************************************ 00:03:13.565 START TEST skip_rpc_with_delay 00:03:13.565 ************************************ 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:13.565 [2024-11-04 16:14:40.230866] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:13.565 00:03:13.565 real 0m0.070s 00:03:13.565 user 0m0.048s 00:03:13.565 sys 0m0.022s 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.565 16:14:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:13.565 ************************************ 00:03:13.565 END TEST skip_rpc_with_delay 00:03:13.565 ************************************ 00:03:13.565 16:14:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:13.565 16:14:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:13.565 16:14:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.565 16:14:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:13.565 ************************************ 00:03:13.565 START TEST exit_on_failed_rpc_init 00:03:13.565 ************************************ 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2620920 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2620920 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2620920 ']' 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:13.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:13.565 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:13.565 [2024-11-04 16:14:40.364281] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:13.565 [2024-11-04 16:14:40.364322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620920 ] 00:03:13.823 [2024-11-04 16:14:40.428268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:13.823 [2024-11-04 16:14:40.470787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.082 16:14:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.082 [2024-11-04 16:14:40.736134] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:14.082 [2024-11-04 16:14:40.736179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621076 ] 00:03:14.082 [2024-11-04 16:14:40.796955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:14.082 [2024-11-04 16:14:40.837411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:14.082 [2024-11-04 16:14:40.837464] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:14.082 [2024-11-04 16:14:40.837474] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:14.082 [2024-11-04 16:14:40.837482] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2620920 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2620920 ']' 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2620920 00:03:14.082 16:14:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:14.082 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2620920 00:03:14.340 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:14.340 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:14.340 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2620920' 00:03:14.340 killing process with pid 2620920 00:03:14.340 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2620920 00:03:14.340 16:14:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2620920 00:03:14.599 00:03:14.599 real 0m0.909s 00:03:14.599 user 0m0.993s 00:03:14.599 sys 0m0.344s 00:03:14.599 16:14:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.599 16:14:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:14.599 ************************************ 00:03:14.599 END TEST exit_on_failed_rpc_init 00:03:14.599 ************************************ 00:03:14.599 16:14:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:14.599 00:03:14.599 real 0m13.031s 00:03:14.599 user 0m12.314s 00:03:14.599 sys 0m1.494s 00:03:14.599 16:14:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.599 16:14:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.599 ************************************ 00:03:14.599 END TEST skip_rpc 00:03:14.599 ************************************ 00:03:14.599 16:14:41 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:14.599 16:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.599 16:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.599 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:03:14.599 ************************************ 00:03:14.599 START TEST rpc_client 00:03:14.599 ************************************ 00:03:14.599 16:14:41 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:14.599 * Looking for test storage... 00:03:14.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:14.599 16:14:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:14.599 16:14:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:14.599 16:14:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:14.858 16:14:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:14.858 16:14:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:14.858 16:14:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:14.858 16:14:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:14.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.858 --rc genhtml_branch_coverage=1 00:03:14.858 --rc genhtml_function_coverage=1 00:03:14.858 --rc genhtml_legend=1 00:03:14.858 --rc geninfo_all_blocks=1 00:03:14.858 --rc geninfo_unexecuted_blocks=1 00:03:14.858 00:03:14.858 ' 00:03:14.858 16:14:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:14.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.858 --rc genhtml_branch_coverage=1 
00:03:14.858 --rc genhtml_function_coverage=1 00:03:14.858 --rc genhtml_legend=1 00:03:14.858 --rc geninfo_all_blocks=1 00:03:14.858 --rc geninfo_unexecuted_blocks=1 00:03:14.858 00:03:14.858 ' 00:03:14.858 16:14:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.859 --rc genhtml_branch_coverage=1 00:03:14.859 --rc genhtml_function_coverage=1 00:03:14.859 --rc genhtml_legend=1 00:03:14.859 --rc geninfo_all_blocks=1 00:03:14.859 --rc geninfo_unexecuted_blocks=1 00:03:14.859 00:03:14.859 ' 00:03:14.859 16:14:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.859 --rc genhtml_branch_coverage=1 00:03:14.859 --rc genhtml_function_coverage=1 00:03:14.859 --rc genhtml_legend=1 00:03:14.859 --rc geninfo_all_blocks=1 00:03:14.859 --rc geninfo_unexecuted_blocks=1 00:03:14.859 00:03:14.859 ' 00:03:14.859 16:14:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:14.859 OK 00:03:14.859 16:14:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:14.859 00:03:14.859 real 0m0.193s 00:03:14.859 user 0m0.107s 00:03:14.859 sys 0m0.098s 00:03:14.859 16:14:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.859 16:14:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:14.859 ************************************ 00:03:14.859 END TEST rpc_client 00:03:14.859 ************************************ 00:03:14.859 16:14:41 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:14.859 16:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.859 16:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.859 16:14:41 -- common/autotest_common.sh@10 
-- # set +x 00:03:14.859 ************************************ 00:03:14.859 START TEST json_config 00:03:14.859 ************************************ 00:03:14.859 16:14:41 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:14.859 16:14:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:14.859 16:14:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:14.859 16:14:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:15.117 16:14:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.117 16:14:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.117 16:14:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.117 16:14:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.117 16:14:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.117 16:14:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.117 16:14:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:15.117 16:14:41 json_config -- scripts/common.sh@345 -- # : 1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.117 16:14:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.117 16:14:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@353 -- # local d=1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.117 16:14:41 json_config -- scripts/common.sh@355 -- # echo 1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.117 16:14:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@353 -- # local d=2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.117 16:14:41 json_config -- scripts/common.sh@355 -- # echo 2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.117 16:14:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.117 16:14:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.117 16:14:41 json_config -- scripts/common.sh@368 -- # return 0 00:03:15.117 16:14:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.117 16:14:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:15.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.118 --rc genhtml_branch_coverage=1 00:03:15.118 --rc genhtml_function_coverage=1 00:03:15.118 --rc genhtml_legend=1 00:03:15.118 --rc geninfo_all_blocks=1 00:03:15.118 --rc geninfo_unexecuted_blocks=1 00:03:15.118 00:03:15.118 ' 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:15.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.118 --rc genhtml_branch_coverage=1 00:03:15.118 --rc genhtml_function_coverage=1 00:03:15.118 --rc genhtml_legend=1 00:03:15.118 --rc geninfo_all_blocks=1 00:03:15.118 --rc geninfo_unexecuted_blocks=1 00:03:15.118 00:03:15.118 ' 00:03:15.118 16:14:41 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:15.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.118 --rc genhtml_branch_coverage=1 00:03:15.118 --rc genhtml_function_coverage=1 00:03:15.118 --rc genhtml_legend=1 00:03:15.118 --rc geninfo_all_blocks=1 00:03:15.118 --rc geninfo_unexecuted_blocks=1 00:03:15.118 00:03:15.118 ' 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:15.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.118 --rc genhtml_branch_coverage=1 00:03:15.118 --rc genhtml_function_coverage=1 00:03:15.118 --rc genhtml_legend=1 00:03:15.118 --rc geninfo_all_blocks=1 00:03:15.118 --rc geninfo_unexecuted_blocks=1 00:03:15.118 00:03:15.118 ' 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:15.118 16:14:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:15.118 16:14:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:15.118 16:14:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.118 16:14:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.118 16:14:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.118 16:14:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.118 16:14:41 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.118 16:14:41 json_config -- paths/export.sh@5 -- # export PATH 00:03:15.118 16:14:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@51 -- # : 0 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:15.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:15.118 16:14:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:15.118 INFO: JSON configuration test init 00:03:15.118 16:14:41 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:15.118 16:14:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:15.118 16:14:41 json_config -- json_config/common.sh@9 -- # local app=target 00:03:15.118 16:14:41 json_config -- json_config/common.sh@10 -- # shift 00:03:15.118 16:14:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:15.118 16:14:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:15.118 16:14:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:15.118 16:14:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:15.118 16:14:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:15.118 16:14:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2621286 00:03:15.118 16:14:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:15.118 Waiting for target to run... 
00:03:15.118 16:14:41 json_config -- json_config/common.sh@25 -- # waitforlisten 2621286 /var/tmp/spdk_tgt.sock 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 2621286 ']' 00:03:15.118 16:14:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:15.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:15.118 16:14:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:15.118 [2024-11-04 16:14:41.840682] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:15.118 [2024-11-04 16:14:41.840731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621286 ] 00:03:15.685 [2024-11-04 16:14:42.276882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.685 [2024-11-04 16:14:42.334440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:15.943 16:14:42 json_config -- json_config/common.sh@26 -- # echo '' 00:03:15.943 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:15.943 16:14:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:15.943 16:14:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:15.943 16:14:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:19.278 16:14:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.278 16:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:19.278 16:14:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@54 -- # sort 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:19.278 16:14:45 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:19.278 16:14:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:19.278 16:14:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:19.279 16:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:19.279 16:14:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:19.279 16:14:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.279 16:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:19.279 16:14:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:19.279 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:19.537 MallocForNvmf0 00:03:19.537 16:14:46 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:19.537 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:19.795 MallocForNvmf1 00:03:19.795 16:14:46 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:19.795 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:19.795 [2024-11-04 16:14:46.564521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:19.795 16:14:46 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:19.795 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:20.053 16:14:46 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:20.053 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:20.311 16:14:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:20.311 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:20.311 16:14:47 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:20.311 16:14:47 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:20.568 [2024-11-04 16:14:47.302871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:20.568 16:14:47 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:20.568 16:14:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:20.568 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.568 16:14:47 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:20.568 16:14:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:20.568 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.568 16:14:47 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:20.568 16:14:47 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:20.568 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:20.827 MallocBdevForConfigChangeCheck 00:03:20.827 16:14:47 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:20.827 16:14:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:20.827 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.827 16:14:47 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:20.827 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:21.392 16:14:47 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:03:21.392 INFO: shutting down applications...
00:03:21.392 16:14:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:03:21.392 16:14:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:03:21.392 16:14:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:03:21.392 16:14:47 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:23.291 Calling clear_iscsi_subsystem
00:03:23.291 Calling clear_nvmf_subsystem
00:03:23.291 Calling clear_nbd_subsystem
00:03:23.291 Calling clear_ublk_subsystem
00:03:23.291 Calling clear_vhost_blk_subsystem
00:03:23.291 Calling clear_vhost_scsi_subsystem
00:03:23.291 Calling clear_bdev_subsystem
00:03:23.291 16:14:50 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:03:23.291 16:14:50 json_config -- json_config/json_config.sh@350 -- # count=100
00:03:23.548 16:14:50 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:03:23.549 16:14:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:23.549 16:14:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:23.549 16:14:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:03:23.806 16:14:50 json_config -- json_config/json_config.sh@352 -- # break
00:03:23.806 16:14:50 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:03:23.806 16:14:50 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:03:23.806 16:14:50 json_config -- json_config/common.sh@31 -- # local app=target
00:03:23.806 16:14:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:23.806 16:14:50 json_config -- json_config/common.sh@35 -- # [[ -n 2621286 ]]
00:03:23.806 16:14:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2621286
00:03:23.806 16:14:50 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:23.806 16:14:50 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:23.806 16:14:50 json_config -- json_config/common.sh@41 -- # kill -0 2621286
00:03:23.806 16:14:50 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:03:24.373 16:14:50 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:03:24.373 16:14:50 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:24.373 16:14:50 json_config -- json_config/common.sh@41 -- # kill -0 2621286
00:03:24.373 16:14:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:24.373 16:14:50 json_config -- json_config/common.sh@43 -- # break
00:03:24.373 16:14:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:24.373 16:14:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:03:24.373 SPDK target shutdown done 16:14:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' INFO: relaunching applications...
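The shutdown sequence traced above follows a simple pattern: send SIGINT, then poll the pid with `kill -0` (signal 0 delivers nothing, it only checks existence) for up to thirty half-second intervals. The following is a minimal stand-alone sketch of that loop; the function name and timeout parameter are illustrative, not the exact `json_config/common.sh` code.

```shell
# Sketch of the graceful-shutdown pattern from the trace:
# SIGINT first, then poll until the process exits or the retry budget runs out.
shutdown_app() {
    local pid=$1
    local retries=${2:-30}                        # 30 * 0.5s = 15s budget, as in the trace
    kill -SIGINT "$pid" 2>/dev/null || return 0   # pid already gone: nothing to do
    local i
    for ((i = 0; i < retries; i++)); do
        # kill -0 sends no signal; it only tests that the pid still exists
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after the budget; caller may escalate (e.g. SIGKILL)
}
```

The nonzero return on timeout lets the caller decide whether to escalate, mirroring how the test harness distinguishes a clean shutdown from a hung target.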
00:03:24.373 16:14:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:24.373 16:14:50 json_config -- json_config/common.sh@9 -- # local app=target
00:03:24.373 16:14:50 json_config -- json_config/common.sh@10 -- # shift
00:03:24.373 16:14:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:24.373 16:14:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:24.373 16:14:50 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:03:24.373 16:14:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:24.373 16:14:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:24.373 16:14:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2623019
00:03:24.373 16:14:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:03:24.373 16:14:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:24.373 16:14:50 json_config -- json_config/common.sh@25 -- # waitforlisten 2623019 /var/tmp/spdk_tgt.sock
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 2623019 ']'
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:24.373 16:14:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:24.373 [2024-11-04 16:14:51.028256] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
00:03:24.373 [2024-11-04 16:14:51.028314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623019 ]
00:03:24.938 [2024-11-04 16:14:51.472866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:24.938 [2024-11-04 16:14:51.525305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:28.218 [2024-11-04 16:14:54.562159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:28.218 [2024-11-04 16:14:54.594506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:03:28.475 16:14:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:28.475 16:14:55 json_config -- common/autotest_common.sh@868 -- # return 0
00:03:28.475 16:14:55 json_config -- json_config/common.sh@26 -- # echo ''
00:03:28.475
00:03:28.475 16:14:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:03:28.475 16:14:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' INFO: Checking if target configuration is the same...
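The `waitforlisten 2623019 /var/tmp/spdk_tgt.sock` step above blocks until the relaunched `spdk_tgt` is reachable on its UNIX-domain socket. A simplified stand-in for that polling loop is sketched below; the real helper in `autotest_common.sh` also verifies the application answers RPCs, whereas this sketch (hypothetical name `waitforsocket`) only waits for the socket file to appear.

```shell
# Simplified sketch of the waitforlisten pattern: poll until a UNIX-domain
# socket shows up on disk, with a bounded number of retries.
waitforsocket() {
    local sock=$1 max_retries=${2:-100}   # max_retries=100 as in the trace
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0        # -S: file exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Bounding the retries matters in CI: a target that never comes up should fail the stage quickly instead of hanging the whole job.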
00:03:28.475 16:14:55 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:28.475 16:14:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:03:28.475 16:14:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:28.475 + '[' 2 -ne 2 ']'
00:03:28.475 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:03:28.475 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:03:28.475 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:28.475 +++ basename /dev/fd/62
00:03:28.475 ++ mktemp /tmp/62.XXX
00:03:28.475 + tmp_file_1=/tmp/62.0wn
00:03:28.475 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:28.476 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:03:28.476 + tmp_file_2=/tmp/spdk_tgt_config.json.rYo
00:03:28.476 + ret=0
00:03:28.476 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:29.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:29.041 + diff -u /tmp/62.0wn /tmp/spdk_tgt_config.json.rYo
00:03:29.041 + echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
00:03:29.041 + rm /tmp/62.0wn /tmp/spdk_tgt_config.json.rYo
00:03:29.041 + exit 0
00:03:29.041 16:14:55 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:03:29.041 16:14:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' INFO: changing configuration and checking if this can be detected...
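The comparison traced above normalizes both JSON documents (SPDK's `config_filter.py -method sort`) into temp files before running `diff -u`, so that ordering differences don't register as configuration changes. A simplified stand-alone sketch of the same idea follows, using `python3 -m json.tool --sort-keys` as a stand-in normalizer (the real `config_filter.py` does more than sort keys); the helper name `json_same` is illustrative.

```shell
# Sketch of the "is the config the same?" check: normalize both JSON files
# (sorted keys, canonical indentation), then diff the normalized copies.
json_same() {
    local a=$1 b=$2
    local t1 t2
    t1=$(mktemp /tmp/cfg.XXXXXX) t2=$(mktemp /tmp/cfg.XXXXXX)
    python3 -m json.tool --sort-keys "$a" > "$t1"
    python3 -m json.tool --sort-keys "$b" > "$t2"
    diff -u "$t1" "$t2"            # prints a unified diff when they differ
    local ret=$?                   # diff: 0 = same, 1 = different
    rm -f "$t1" "$t2"
    return $ret
}
```

Propagating diff's exit status (0 same, 1 different) is what lets the harness branch into "INFO: JSON config files are the same" versus the change-detected path.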
00:03:29.041 16:14:55 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:29.041 16:14:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:03:29.041 16:14:55 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:03:29.041 16:14:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:29.041 16:14:55 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:29.041 + '[' 2 -ne 2 ']'
00:03:29.041 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:03:29.041 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:03:29.041 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:29.041 +++ basename /dev/fd/62
00:03:29.041 ++ mktemp /tmp/62.XXX
00:03:29.041 + tmp_file_1=/tmp/62.u0x
00:03:29.041 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:29.041 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:03:29.041 + tmp_file_2=/tmp/spdk_tgt_config.json.2Id
00:03:29.041 + ret=0
00:03:29.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:29.607 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:03:29.607 + diff -u /tmp/62.u0x /tmp/spdk_tgt_config.json.2Id
00:03:29.607 + ret=1
00:03:29.607 + echo '=== Start of file: /tmp/62.u0x ==='
00:03:29.607 + cat /tmp/62.u0x
00:03:29.607 + echo '=== End of file: /tmp/62.u0x ==='
00:03:29.607 + echo ''
00:03:29.607 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2Id ==='
00:03:29.607 + cat /tmp/spdk_tgt_config.json.2Id
00:03:29.607 + echo '=== End of file: /tmp/spdk_tgt_config.json.2Id ==='
00:03:29.607 + echo ''
00:03:29.607 + rm /tmp/62.u0x /tmp/spdk_tgt_config.json.2Id
00:03:29.607 + exit 1
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' INFO: configuration change detected.
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 2623019 ]]
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@200 -- # uname -s
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:29.607 16:14:56 json_config -- json_config/json_config.sh@330 -- # killprocess 2623019
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 2623019 ']'
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@958 -- # kill -0 2623019
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@959 -- # uname
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623019
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623019'
killing process with pid 2623019
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@973 -- # kill 2623019
00:03:29.607 16:14:56 json_config -- common/autotest_common.sh@978 -- # wait 2623019
00:03:31.513 16:14:58 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:03:31.513 16:14:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:03:31.513 16:14:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:31.513 16:14:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:31.513 16:14:58 json_config -- json_config/json_config.sh@335 -- # return 0
00:03:31.513 16:14:58 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
INFO: Success
00:03:31.513
00:03:31.513 real 0m16.747s
00:03:31.513 user 0m17.010s
00:03:31.513 sys 0m2.702s
00:03:31.513 16:14:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:31.772 16:14:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:31.772 ************************************
00:03:31.772 END TEST json_config
00:03:31.772 ************************************
00:03:31.772 16:14:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:31.772 16:14:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.772 16:14:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.772 16:14:58 -- common/autotest_common.sh@10 -- # set +x 00:03:31.772 ************************************ 00:03:31.772 START TEST json_config_extra_key 00:03:31.772 ************************************ 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.772 --rc genhtml_branch_coverage=1 00:03:31.772 --rc genhtml_function_coverage=1 00:03:31.772 --rc genhtml_legend=1 00:03:31.772 --rc geninfo_all_blocks=1 
00:03:31.772 --rc geninfo_unexecuted_blocks=1 00:03:31.772 00:03:31.772 ' 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.772 --rc genhtml_branch_coverage=1 00:03:31.772 --rc genhtml_function_coverage=1 00:03:31.772 --rc genhtml_legend=1 00:03:31.772 --rc geninfo_all_blocks=1 00:03:31.772 --rc geninfo_unexecuted_blocks=1 00:03:31.772 00:03:31.772 ' 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.772 --rc genhtml_branch_coverage=1 00:03:31.772 --rc genhtml_function_coverage=1 00:03:31.772 --rc genhtml_legend=1 00:03:31.772 --rc geninfo_all_blocks=1 00:03:31.772 --rc geninfo_unexecuted_blocks=1 00:03:31.772 00:03:31.772 ' 00:03:31.772 16:14:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.772 --rc genhtml_branch_coverage=1 00:03:31.772 --rc genhtml_function_coverage=1 00:03:31.772 --rc genhtml_legend=1 00:03:31.772 --rc geninfo_all_blocks=1 00:03:31.772 --rc geninfo_unexecuted_blocks=1 00:03:31.772 00:03:31.772 ' 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
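The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` lines traced earlier come from `scripts/common.sh`, which splits both version strings into components (`IFS=.-: read -ra …`) and compares them numerically, padding the shorter one with zeros. A compact stand-alone sketch of the same idea follows; the helper name `version_lt` is hypothetical, and it is simplified to dot-separated numeric components only.

```shell
# Compare two dotted version strings numerically, component by component,
# mirroring the cmp_versions loop in the trace. Missing components count as 0,
# so "1" and "1.0" compare equal. Returns 0 iff $1 < $2.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}
```

Numeric comparison is the point: a plain string compare would call `1.15 < 1.2` true, while component-wise arithmetic correctly treats 15 > 2.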
00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.772 16:14:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.772 16:14:58 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.772 16:14:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.772 16:14:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.772 16:14:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:31.772 16:14:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:31.772 16:14:58 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.772 16:14:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:31.772 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:31.773 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:31.773 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:31.773 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:31.773 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:31.773 INFO: launching applications... 00:03:31.773 16:14:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2624517 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:31.773 Waiting for target to run... 
00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2624517 /var/tmp/spdk_tgt.sock 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2624517 ']' 00:03:31.773 16:14:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:31.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.773 16:14:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:32.031 [2024-11-04 16:14:58.638155] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:32.031 [2024-11-04 16:14:58.638202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624517 ] 00:03:32.288 [2024-11-04 16:14:58.907312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.288 [2024-11-04 16:14:58.942418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.853 16:14:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:32.853 16:14:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:32.853 00:03:32.853 16:14:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:32.853 INFO: shutting down applications... 00:03:32.853 16:14:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2624517 ]] 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2624517 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2624517 00:03:32.853 16:14:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:33.419 16:14:59 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2624517 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:33.419 16:14:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:33.419 SPDK target shutdown done 00:03:33.419 16:14:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:33.419 Success 00:03:33.419 00:03:33.419 real 0m1.550s 00:03:33.419 user 0m1.319s 00:03:33.419 sys 0m0.395s 00:03:33.419 16:14:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.419 16:14:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:33.419 ************************************ 00:03:33.419 END TEST json_config_extra_key 00:03:33.419 ************************************ 00:03:33.419 16:14:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:33.419 16:14:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.419 16:14:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.419 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:03:33.419 ************************************ 00:03:33.419 START TEST alias_rpc 00:03:33.419 ************************************ 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:33.419 * Looking for test storage... 
00:03:33.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.419 16:15:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.419 16:15:00 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.419 --rc genhtml_branch_coverage=1 00:03:33.419 --rc genhtml_function_coverage=1 00:03:33.419 --rc genhtml_legend=1 00:03:33.419 --rc geninfo_all_blocks=1 00:03:33.420 --rc geninfo_unexecuted_blocks=1 00:03:33.420 00:03:33.420 ' 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.420 --rc genhtml_branch_coverage=1 00:03:33.420 --rc genhtml_function_coverage=1 00:03:33.420 --rc genhtml_legend=1 00:03:33.420 --rc geninfo_all_blocks=1 00:03:33.420 --rc geninfo_unexecuted_blocks=1 00:03:33.420 00:03:33.420 ' 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:33.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.420 --rc genhtml_branch_coverage=1 00:03:33.420 --rc genhtml_function_coverage=1 00:03:33.420 --rc genhtml_legend=1 00:03:33.420 --rc geninfo_all_blocks=1 00:03:33.420 --rc geninfo_unexecuted_blocks=1 00:03:33.420 00:03:33.420 ' 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.420 --rc genhtml_branch_coverage=1 00:03:33.420 --rc genhtml_function_coverage=1 00:03:33.420 --rc genhtml_legend=1 00:03:33.420 --rc geninfo_all_blocks=1 00:03:33.420 --rc geninfo_unexecuted_blocks=1 00:03:33.420 00:03:33.420 ' 00:03:33.420 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:33.420 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2624829 00:03:33.420 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2624829 00:03:33.420 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2624829 ']' 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.420 16:15:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.677 [2024-11-04 16:15:00.255982] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:33.677 [2024-11-04 16:15:00.256036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624829 ] 00:03:33.677 [2024-11-04 16:15:00.321527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.677 [2024-11-04 16:15:00.361503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.935 16:15:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.935 16:15:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:33.935 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:34.193 16:15:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2624829 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2624829 ']' 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2624829 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2624829 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2624829' 00:03:34.193 killing process with pid 2624829 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 2624829 00:03:34.193 16:15:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 2624829 00:03:34.451 00:03:34.451 real 0m1.101s 00:03:34.451 user 0m1.137s 00:03:34.451 sys 0m0.392s 00:03:34.451 16:15:01 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.451 16:15:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.451 ************************************ 00:03:34.451 END TEST alias_rpc 00:03:34.451 ************************************ 00:03:34.451 16:15:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:34.451 16:15:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:34.451 16:15:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.451 16:15:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.451 16:15:01 -- common/autotest_common.sh@10 -- # set +x 00:03:34.451 ************************************ 00:03:34.451 START TEST spdkcli_tcp 00:03:34.451 ************************************ 00:03:34.451 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:34.451 * Looking for test storage... 
00:03:34.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.709 16:15:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.709 --rc genhtml_branch_coverage=1 00:03:34.709 --rc genhtml_function_coverage=1 00:03:34.709 --rc genhtml_legend=1 00:03:34.709 --rc geninfo_all_blocks=1 00:03:34.709 --rc geninfo_unexecuted_blocks=1 00:03:34.709 00:03:34.709 ' 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.709 --rc genhtml_branch_coverage=1 00:03:34.709 --rc genhtml_function_coverage=1 00:03:34.709 --rc genhtml_legend=1 00:03:34.709 --rc geninfo_all_blocks=1 00:03:34.709 --rc geninfo_unexecuted_blocks=1 00:03:34.709 00:03:34.709 ' 00:03:34.709 16:15:01 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.709 --rc genhtml_branch_coverage=1 00:03:34.709 --rc genhtml_function_coverage=1 00:03:34.709 --rc genhtml_legend=1 00:03:34.709 --rc geninfo_all_blocks=1 00:03:34.709 --rc geninfo_unexecuted_blocks=1 00:03:34.709 00:03:34.709 ' 00:03:34.709 16:15:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.709 --rc genhtml_branch_coverage=1 00:03:34.709 --rc genhtml_function_coverage=1 00:03:34.709 --rc genhtml_legend=1 00:03:34.709 --rc geninfo_all_blocks=1 00:03:34.709 --rc geninfo_unexecuted_blocks=1 00:03:34.709 00:03:34.709 ' 00:03:34.709 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2625185 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2625185 00:03:34.710 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2625185 ']' 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:34.710 16:15:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:34.710 [2024-11-04 16:15:01.429559] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:34.710 [2024-11-04 16:15:01.429617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625185 ] 00:03:34.710 [2024-11-04 16:15:01.492282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:34.967 [2024-11-04 16:15:01.538619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:34.967 [2024-11-04 16:15:01.538623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.967 16:15:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.967 16:15:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:34.967 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2625213 00:03:34.967 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:34.967 16:15:01 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:35.225 [ 00:03:35.225 "bdev_malloc_delete", 00:03:35.225 "bdev_malloc_create", 00:03:35.225 "bdev_null_resize", 00:03:35.225 "bdev_null_delete", 00:03:35.225 "bdev_null_create", 00:03:35.225 "bdev_nvme_cuse_unregister", 00:03:35.225 "bdev_nvme_cuse_register", 00:03:35.225 "bdev_opal_new_user", 00:03:35.225 "bdev_opal_set_lock_state", 00:03:35.225 "bdev_opal_delete", 00:03:35.225 "bdev_opal_get_info", 00:03:35.225 "bdev_opal_create", 00:03:35.225 "bdev_nvme_opal_revert", 00:03:35.225 "bdev_nvme_opal_init", 00:03:35.225 "bdev_nvme_send_cmd", 00:03:35.225 "bdev_nvme_set_keys", 00:03:35.225 "bdev_nvme_get_path_iostat", 00:03:35.225 "bdev_nvme_get_mdns_discovery_info", 00:03:35.225 "bdev_nvme_stop_mdns_discovery", 00:03:35.225 "bdev_nvme_start_mdns_discovery", 00:03:35.225 "bdev_nvme_set_multipath_policy", 00:03:35.225 "bdev_nvme_set_preferred_path", 00:03:35.225 "bdev_nvme_get_io_paths", 00:03:35.225 "bdev_nvme_remove_error_injection", 00:03:35.225 "bdev_nvme_add_error_injection", 00:03:35.225 "bdev_nvme_get_discovery_info", 00:03:35.225 "bdev_nvme_stop_discovery", 00:03:35.225 "bdev_nvme_start_discovery", 00:03:35.225 "bdev_nvme_get_controller_health_info", 00:03:35.225 "bdev_nvme_disable_controller", 00:03:35.225 "bdev_nvme_enable_controller", 00:03:35.225 "bdev_nvme_reset_controller", 00:03:35.225 "bdev_nvme_get_transport_statistics", 00:03:35.225 "bdev_nvme_apply_firmware", 00:03:35.225 "bdev_nvme_detach_controller", 00:03:35.225 "bdev_nvme_get_controllers", 00:03:35.225 "bdev_nvme_attach_controller", 00:03:35.225 "bdev_nvme_set_hotplug", 00:03:35.226 "bdev_nvme_set_options", 00:03:35.226 "bdev_passthru_delete", 00:03:35.226 "bdev_passthru_create", 00:03:35.226 "bdev_lvol_set_parent_bdev", 00:03:35.226 "bdev_lvol_set_parent", 00:03:35.226 "bdev_lvol_check_shallow_copy", 00:03:35.226 "bdev_lvol_start_shallow_copy", 00:03:35.226 "bdev_lvol_grow_lvstore", 00:03:35.226 
"bdev_lvol_get_lvols", 00:03:35.226 "bdev_lvol_get_lvstores", 00:03:35.226 "bdev_lvol_delete", 00:03:35.226 "bdev_lvol_set_read_only", 00:03:35.226 "bdev_lvol_resize", 00:03:35.226 "bdev_lvol_decouple_parent", 00:03:35.226 "bdev_lvol_inflate", 00:03:35.226 "bdev_lvol_rename", 00:03:35.226 "bdev_lvol_clone_bdev", 00:03:35.226 "bdev_lvol_clone", 00:03:35.226 "bdev_lvol_snapshot", 00:03:35.226 "bdev_lvol_create", 00:03:35.226 "bdev_lvol_delete_lvstore", 00:03:35.226 "bdev_lvol_rename_lvstore", 00:03:35.226 "bdev_lvol_create_lvstore", 00:03:35.226 "bdev_raid_set_options", 00:03:35.226 "bdev_raid_remove_base_bdev", 00:03:35.226 "bdev_raid_add_base_bdev", 00:03:35.226 "bdev_raid_delete", 00:03:35.226 "bdev_raid_create", 00:03:35.226 "bdev_raid_get_bdevs", 00:03:35.226 "bdev_error_inject_error", 00:03:35.226 "bdev_error_delete", 00:03:35.226 "bdev_error_create", 00:03:35.226 "bdev_split_delete", 00:03:35.226 "bdev_split_create", 00:03:35.226 "bdev_delay_delete", 00:03:35.226 "bdev_delay_create", 00:03:35.226 "bdev_delay_update_latency", 00:03:35.226 "bdev_zone_block_delete", 00:03:35.226 "bdev_zone_block_create", 00:03:35.226 "blobfs_create", 00:03:35.226 "blobfs_detect", 00:03:35.226 "blobfs_set_cache_size", 00:03:35.226 "bdev_aio_delete", 00:03:35.226 "bdev_aio_rescan", 00:03:35.226 "bdev_aio_create", 00:03:35.226 "bdev_ftl_set_property", 00:03:35.226 "bdev_ftl_get_properties", 00:03:35.226 "bdev_ftl_get_stats", 00:03:35.226 "bdev_ftl_unmap", 00:03:35.226 "bdev_ftl_unload", 00:03:35.226 "bdev_ftl_delete", 00:03:35.226 "bdev_ftl_load", 00:03:35.226 "bdev_ftl_create", 00:03:35.226 "bdev_virtio_attach_controller", 00:03:35.226 "bdev_virtio_scsi_get_devices", 00:03:35.226 "bdev_virtio_detach_controller", 00:03:35.226 "bdev_virtio_blk_set_hotplug", 00:03:35.226 "bdev_iscsi_delete", 00:03:35.226 "bdev_iscsi_create", 00:03:35.226 "bdev_iscsi_set_options", 00:03:35.226 "accel_error_inject_error", 00:03:35.226 "ioat_scan_accel_module", 00:03:35.226 "dsa_scan_accel_module", 
00:03:35.226 "iaa_scan_accel_module", 00:03:35.226 "vfu_virtio_create_fs_endpoint", 00:03:35.226 "vfu_virtio_create_scsi_endpoint", 00:03:35.226 "vfu_virtio_scsi_remove_target", 00:03:35.226 "vfu_virtio_scsi_add_target", 00:03:35.226 "vfu_virtio_create_blk_endpoint", 00:03:35.226 "vfu_virtio_delete_endpoint", 00:03:35.226 "keyring_file_remove_key", 00:03:35.226 "keyring_file_add_key", 00:03:35.226 "keyring_linux_set_options", 00:03:35.226 "fsdev_aio_delete", 00:03:35.226 "fsdev_aio_create", 00:03:35.226 "iscsi_get_histogram", 00:03:35.226 "iscsi_enable_histogram", 00:03:35.226 "iscsi_set_options", 00:03:35.226 "iscsi_get_auth_groups", 00:03:35.226 "iscsi_auth_group_remove_secret", 00:03:35.226 "iscsi_auth_group_add_secret", 00:03:35.226 "iscsi_delete_auth_group", 00:03:35.226 "iscsi_create_auth_group", 00:03:35.226 "iscsi_set_discovery_auth", 00:03:35.226 "iscsi_get_options", 00:03:35.226 "iscsi_target_node_request_logout", 00:03:35.226 "iscsi_target_node_set_redirect", 00:03:35.226 "iscsi_target_node_set_auth", 00:03:35.226 "iscsi_target_node_add_lun", 00:03:35.226 "iscsi_get_stats", 00:03:35.226 "iscsi_get_connections", 00:03:35.226 "iscsi_portal_group_set_auth", 00:03:35.226 "iscsi_start_portal_group", 00:03:35.226 "iscsi_delete_portal_group", 00:03:35.226 "iscsi_create_portal_group", 00:03:35.226 "iscsi_get_portal_groups", 00:03:35.226 "iscsi_delete_target_node", 00:03:35.226 "iscsi_target_node_remove_pg_ig_maps", 00:03:35.226 "iscsi_target_node_add_pg_ig_maps", 00:03:35.226 "iscsi_create_target_node", 00:03:35.226 "iscsi_get_target_nodes", 00:03:35.226 "iscsi_delete_initiator_group", 00:03:35.226 "iscsi_initiator_group_remove_initiators", 00:03:35.226 "iscsi_initiator_group_add_initiators", 00:03:35.226 "iscsi_create_initiator_group", 00:03:35.226 "iscsi_get_initiator_groups", 00:03:35.226 "nvmf_set_crdt", 00:03:35.226 "nvmf_set_config", 00:03:35.226 "nvmf_set_max_subsystems", 00:03:35.226 "nvmf_stop_mdns_prr", 00:03:35.226 "nvmf_publish_mdns_prr", 
00:03:35.226 "nvmf_subsystem_get_listeners", 00:03:35.226 "nvmf_subsystem_get_qpairs", 00:03:35.226 "nvmf_subsystem_get_controllers", 00:03:35.226 "nvmf_get_stats", 00:03:35.226 "nvmf_get_transports", 00:03:35.226 "nvmf_create_transport", 00:03:35.226 "nvmf_get_targets", 00:03:35.226 "nvmf_delete_target", 00:03:35.226 "nvmf_create_target", 00:03:35.226 "nvmf_subsystem_allow_any_host", 00:03:35.226 "nvmf_subsystem_set_keys", 00:03:35.226 "nvmf_subsystem_remove_host", 00:03:35.226 "nvmf_subsystem_add_host", 00:03:35.226 "nvmf_ns_remove_host", 00:03:35.226 "nvmf_ns_add_host", 00:03:35.226 "nvmf_subsystem_remove_ns", 00:03:35.226 "nvmf_subsystem_set_ns_ana_group", 00:03:35.226 "nvmf_subsystem_add_ns", 00:03:35.226 "nvmf_subsystem_listener_set_ana_state", 00:03:35.226 "nvmf_discovery_get_referrals", 00:03:35.226 "nvmf_discovery_remove_referral", 00:03:35.226 "nvmf_discovery_add_referral", 00:03:35.226 "nvmf_subsystem_remove_listener", 00:03:35.226 "nvmf_subsystem_add_listener", 00:03:35.226 "nvmf_delete_subsystem", 00:03:35.226 "nvmf_create_subsystem", 00:03:35.226 "nvmf_get_subsystems", 00:03:35.226 "env_dpdk_get_mem_stats", 00:03:35.226 "nbd_get_disks", 00:03:35.226 "nbd_stop_disk", 00:03:35.226 "nbd_start_disk", 00:03:35.226 "ublk_recover_disk", 00:03:35.226 "ublk_get_disks", 00:03:35.226 "ublk_stop_disk", 00:03:35.226 "ublk_start_disk", 00:03:35.226 "ublk_destroy_target", 00:03:35.226 "ublk_create_target", 00:03:35.226 "virtio_blk_create_transport", 00:03:35.226 "virtio_blk_get_transports", 00:03:35.226 "vhost_controller_set_coalescing", 00:03:35.226 "vhost_get_controllers", 00:03:35.226 "vhost_delete_controller", 00:03:35.226 "vhost_create_blk_controller", 00:03:35.226 "vhost_scsi_controller_remove_target", 00:03:35.226 "vhost_scsi_controller_add_target", 00:03:35.226 "vhost_start_scsi_controller", 00:03:35.226 "vhost_create_scsi_controller", 00:03:35.226 "thread_set_cpumask", 00:03:35.226 "scheduler_set_options", 00:03:35.226 "framework_get_governor", 00:03:35.226 
"framework_get_scheduler", 00:03:35.226 "framework_set_scheduler", 00:03:35.226 "framework_get_reactors", 00:03:35.226 "thread_get_io_channels", 00:03:35.226 "thread_get_pollers", 00:03:35.226 "thread_get_stats", 00:03:35.226 "framework_monitor_context_switch", 00:03:35.226 "spdk_kill_instance", 00:03:35.226 "log_enable_timestamps", 00:03:35.226 "log_get_flags", 00:03:35.226 "log_clear_flag", 00:03:35.226 "log_set_flag", 00:03:35.226 "log_get_level", 00:03:35.226 "log_set_level", 00:03:35.226 "log_get_print_level", 00:03:35.226 "log_set_print_level", 00:03:35.226 "framework_enable_cpumask_locks", 00:03:35.226 "framework_disable_cpumask_locks", 00:03:35.226 "framework_wait_init", 00:03:35.226 "framework_start_init", 00:03:35.226 "scsi_get_devices", 00:03:35.226 "bdev_get_histogram", 00:03:35.226 "bdev_enable_histogram", 00:03:35.226 "bdev_set_qos_limit", 00:03:35.226 "bdev_set_qd_sampling_period", 00:03:35.226 "bdev_get_bdevs", 00:03:35.226 "bdev_reset_iostat", 00:03:35.226 "bdev_get_iostat", 00:03:35.226 "bdev_examine", 00:03:35.226 "bdev_wait_for_examine", 00:03:35.226 "bdev_set_options", 00:03:35.226 "accel_get_stats", 00:03:35.226 "accel_set_options", 00:03:35.226 "accel_set_driver", 00:03:35.226 "accel_crypto_key_destroy", 00:03:35.226 "accel_crypto_keys_get", 00:03:35.226 "accel_crypto_key_create", 00:03:35.226 "accel_assign_opc", 00:03:35.226 "accel_get_module_info", 00:03:35.226 "accel_get_opc_assignments", 00:03:35.226 "vmd_rescan", 00:03:35.226 "vmd_remove_device", 00:03:35.226 "vmd_enable", 00:03:35.226 "sock_get_default_impl", 00:03:35.226 "sock_set_default_impl", 00:03:35.226 "sock_impl_set_options", 00:03:35.226 "sock_impl_get_options", 00:03:35.226 "iobuf_get_stats", 00:03:35.226 "iobuf_set_options", 00:03:35.226 "keyring_get_keys", 00:03:35.226 "vfu_tgt_set_base_path", 00:03:35.226 "framework_get_pci_devices", 00:03:35.226 "framework_get_config", 00:03:35.226 "framework_get_subsystems", 00:03:35.226 "fsdev_set_opts", 00:03:35.226 "fsdev_get_opts", 
00:03:35.226 "trace_get_info", 00:03:35.226 "trace_get_tpoint_group_mask", 00:03:35.226 "trace_disable_tpoint_group", 00:03:35.226 "trace_enable_tpoint_group", 00:03:35.226 "trace_clear_tpoint_mask", 00:03:35.226 "trace_set_tpoint_mask", 00:03:35.226 "notify_get_notifications", 00:03:35.226 "notify_get_types", 00:03:35.226 "spdk_get_version", 00:03:35.226 "rpc_get_methods" 00:03:35.226 ] 00:03:35.226 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:35.226 16:15:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:35.226 16:15:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:35.226 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:35.226 16:15:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2625185 00:03:35.226 16:15:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2625185 ']' 00:03:35.226 16:15:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2625185 00:03:35.226 16:15:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:35.227 16:15:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.227 16:15:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625185 00:03:35.227 16:15:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.227 16:15:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.227 16:15:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625185' 00:03:35.227 killing process with pid 2625185 00:03:35.227 16:15:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2625185 00:03:35.227 16:15:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2625185 00:03:35.788 00:03:35.788 real 0m1.143s 00:03:35.788 user 0m1.964s 00:03:35.788 sys 0m0.423s 00:03:35.788 16:15:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.788 16:15:02 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:35.788 ************************************ 00:03:35.788 END TEST spdkcli_tcp 00:03:35.788 ************************************ 00:03:35.788 16:15:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:35.788 16:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.788 16:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.788 16:15:02 -- common/autotest_common.sh@10 -- # set +x 00:03:35.788 ************************************ 00:03:35.788 START TEST dpdk_mem_utility 00:03:35.788 ************************************ 00:03:35.788 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:35.788 * Looking for test storage... 00:03:35.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:35.788 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.788 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.788 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.788 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.788 16:15:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.789 16:15:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:03:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.789 --rc genhtml_branch_coverage=1 00:03:35.789 --rc genhtml_function_coverage=1 00:03:35.789 --rc genhtml_legend=1 00:03:35.789 --rc geninfo_all_blocks=1 00:03:35.789 --rc geninfo_unexecuted_blocks=1 00:03:35.789 00:03:35.789 ' 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.789 --rc genhtml_branch_coverage=1 00:03:35.789 --rc genhtml_function_coverage=1 00:03:35.789 --rc genhtml_legend=1 00:03:35.789 --rc geninfo_all_blocks=1 00:03:35.789 --rc geninfo_unexecuted_blocks=1 00:03:35.789 00:03:35.789 ' 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.789 --rc genhtml_branch_coverage=1 00:03:35.789 --rc genhtml_function_coverage=1 00:03:35.789 --rc genhtml_legend=1 00:03:35.789 --rc geninfo_all_blocks=1 00:03:35.789 --rc geninfo_unexecuted_blocks=1 00:03:35.789 00:03:35.789 ' 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.789 --rc genhtml_branch_coverage=1 00:03:35.789 --rc genhtml_function_coverage=1 00:03:35.789 --rc genhtml_legend=1 00:03:35.789 --rc geninfo_all_blocks=1 00:03:35.789 --rc geninfo_unexecuted_blocks=1 00:03:35.789 00:03:35.789 ' 00:03:35.789 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:35.789 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2625475 00:03:35.789 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2625475 00:03:35.789 16:15:02 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2625475 ']' 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.789 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:36.046 [2024-11-04 16:15:02.624991] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:36.046 [2024-11-04 16:15:02.625040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625475 ] 00:03:36.046 [2024-11-04 16:15:02.688964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.046 [2024-11-04 16:15:02.730846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.305 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.305 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:36.305 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:36.305 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:36.305 16:15:02 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.305 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:36.305 { 00:03:36.305 "filename": "/tmp/spdk_mem_dump.txt" 00:03:36.305 } 00:03:36.305 16:15:02 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.305 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:36.305 DPDK memory size 810.000000 MiB in 1 heap(s) 00:03:36.305 1 heaps totaling size 810.000000 MiB 00:03:36.305 size: 810.000000 MiB heap id: 0 00:03:36.305 end heaps---------- 00:03:36.305 9 mempools totaling size 595.772034 MiB 00:03:36.305 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:36.305 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:36.305 size: 92.545471 MiB name: bdev_io_2625475 00:03:36.305 size: 50.003479 MiB name: msgpool_2625475 00:03:36.305 size: 36.509338 MiB name: fsdev_io_2625475 00:03:36.305 size: 21.763794 MiB name: PDU_Pool 00:03:36.305 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:36.305 size: 4.133484 MiB name: evtpool_2625475 00:03:36.305 size: 0.026123 MiB name: Session_Pool 00:03:36.305 end mempools------- 00:03:36.305 6 memzones totaling size 4.142822 MiB 00:03:36.305 size: 1.000366 MiB name: RG_ring_0_2625475 00:03:36.305 size: 1.000366 MiB name: RG_ring_1_2625475 00:03:36.305 size: 1.000366 MiB name: RG_ring_4_2625475 00:03:36.305 size: 1.000366 MiB name: RG_ring_5_2625475 00:03:36.305 size: 0.125366 MiB name: RG_ring_2_2625475 00:03:36.305 size: 0.015991 MiB name: RG_ring_3_2625475 00:03:36.305 end memzones------- 00:03:36.305 16:15:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:36.305 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:36.305 list of free elements. 
size: 10.862488 MiB 00:03:36.305 element at address: 0x200018a00000 with size: 0.999878 MiB 00:03:36.305 element at address: 0x200018c00000 with size: 0.999878 MiB 00:03:36.305 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:36.305 element at address: 0x200031800000 with size: 0.994446 MiB 00:03:36.305 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:36.305 element at address: 0x200012c00000 with size: 0.954285 MiB 00:03:36.305 element at address: 0x200018e00000 with size: 0.936584 MiB 00:03:36.305 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:36.305 element at address: 0x20001a600000 with size: 0.582886 MiB 00:03:36.305 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:36.305 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:36.305 element at address: 0x200019000000 with size: 0.485657 MiB 00:03:36.305 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:36.305 element at address: 0x200027a00000 with size: 0.410034 MiB 00:03:36.305 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:36.305 list of standard malloc elements. 
size: 199.218628 MiB 00:03:36.305 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:36.305 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:36.305 element at address: 0x200018afff80 with size: 1.000122 MiB 00:03:36.305 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:03:36.305 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:36.305 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:36.305 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:03:36.305 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:36.305 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:03:36.305 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:36.305 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:03:36.305 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20001a695380 with size: 0.000183 MiB 00:03:36.305 element at address: 0x20001a695440 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200027a69040 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:03:36.305 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:03:36.305 list of memzone associated elements. 
size: 599.918884 MiB 00:03:36.305 element at address: 0x20001a695500 with size: 211.416748 MiB 00:03:36.305 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:36.305 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:03:36.305 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:36.305 element at address: 0x200012df4780 with size: 92.045044 MiB 00:03:36.305 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2625475_0 00:03:36.305 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:36.305 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2625475_0 00:03:36.305 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:36.305 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2625475_0 00:03:36.305 element at address: 0x2000191be940 with size: 20.255554 MiB 00:03:36.305 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:36.305 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:03:36.305 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:36.305 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:36.305 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2625475_0 00:03:36.305 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:36.305 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2625475 00:03:36.305 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:36.305 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2625475 00:03:36.305 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:36.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:36.306 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:03:36.306 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:36.306 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:36.306 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:36.306 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:36.306 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:36.306 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:36.306 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2625475 00:03:36.306 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:36.306 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2625475 00:03:36.306 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:03:36.306 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2625475 00:03:36.306 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:03:36.306 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2625475 00:03:36.306 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:36.306 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2625475 00:03:36.306 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:36.306 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2625475 00:03:36.306 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:36.306 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:36.306 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:36.306 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:36.306 element at address: 0x20001907c540 with size: 0.250488 MiB 00:03:36.306 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:36.306 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:36.306 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2625475 00:03:36.306 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:36.306 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2625475 00:03:36.306 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:03:36.306 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:36.306 element at address: 0x200027a69100 with size: 0.023743 MiB 00:03:36.306 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:36.306 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:36.306 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2625475 00:03:36.306 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:03:36.306 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:36.306 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:36.306 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2625475 00:03:36.306 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:36.306 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2625475 00:03:36.306 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:36.306 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2625475 00:03:36.306 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:03:36.306 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:36.306 16:15:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:36.306 16:15:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2625475 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2625475 ']' 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2625475 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625475 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.306 16:15:03 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625475' 00:03:36.306 killing process with pid 2625475 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2625475 00:03:36.306 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2625475 00:03:36.564 00:03:36.564 real 0m0.974s 00:03:36.564 user 0m0.911s 00:03:36.564 sys 0m0.389s 00:03:36.564 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.564 16:15:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:36.564 ************************************ 00:03:36.564 END TEST dpdk_mem_utility 00:03:36.564 ************************************ 00:03:36.820 16:15:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:36.820 16:15:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.820 16:15:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.820 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:36.820 ************************************ 00:03:36.820 START TEST event 00:03:36.820 ************************************ 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:36.820 * Looking for test storage... 
00:03:36.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1693 -- # lcov --version 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:36.820 16:15:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.820 16:15:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.820 16:15:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.820 16:15:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.820 16:15:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.820 16:15:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.820 16:15:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.820 16:15:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.820 16:15:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.820 16:15:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.820 16:15:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.820 16:15:03 event -- scripts/common.sh@344 -- # case "$op" in 00:03:36.820 16:15:03 event -- scripts/common.sh@345 -- # : 1 00:03:36.820 16:15:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.820 16:15:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.820 16:15:03 event -- scripts/common.sh@365 -- # decimal 1 00:03:36.820 16:15:03 event -- scripts/common.sh@353 -- # local d=1 00:03:36.820 16:15:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.820 16:15:03 event -- scripts/common.sh@355 -- # echo 1 00:03:36.820 16:15:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.820 16:15:03 event -- scripts/common.sh@366 -- # decimal 2 00:03:36.820 16:15:03 event -- scripts/common.sh@353 -- # local d=2 00:03:36.820 16:15:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.820 16:15:03 event -- scripts/common.sh@355 -- # echo 2 00:03:36.820 16:15:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.820 16:15:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.820 16:15:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.820 16:15:03 event -- scripts/common.sh@368 -- # return 0 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:36.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.820 --rc genhtml_branch_coverage=1 00:03:36.820 --rc genhtml_function_coverage=1 00:03:36.820 --rc genhtml_legend=1 00:03:36.820 --rc geninfo_all_blocks=1 00:03:36.820 --rc geninfo_unexecuted_blocks=1 00:03:36.820 00:03:36.820 ' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:36.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.820 --rc genhtml_branch_coverage=1 00:03:36.820 --rc genhtml_function_coverage=1 00:03:36.820 --rc genhtml_legend=1 00:03:36.820 --rc geninfo_all_blocks=1 00:03:36.820 --rc geninfo_unexecuted_blocks=1 00:03:36.820 00:03:36.820 ' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:36.820 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:36.820 --rc genhtml_branch_coverage=1 00:03:36.820 --rc genhtml_function_coverage=1 00:03:36.820 --rc genhtml_legend=1 00:03:36.820 --rc geninfo_all_blocks=1 00:03:36.820 --rc geninfo_unexecuted_blocks=1 00:03:36.820 00:03:36.820 ' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:36.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.820 --rc genhtml_branch_coverage=1 00:03:36.820 --rc genhtml_function_coverage=1 00:03:36.820 --rc genhtml_legend=1 00:03:36.820 --rc geninfo_all_blocks=1 00:03:36.820 --rc geninfo_unexecuted_blocks=1 00:03:36.820 00:03:36.820 ' 00:03:36.820 16:15:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:36.820 16:15:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:36.820 16:15:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:36.820 16:15:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.820 16:15:03 event -- common/autotest_common.sh@10 -- # set +x 00:03:37.077 ************************************ 00:03:37.077 START TEST event_perf 00:03:37.077 ************************************ 00:03:37.077 16:15:03 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:37.077 Running I/O for 1 seconds...[2024-11-04 16:15:03.673406] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:37.077 [2024-11-04 16:15:03.673476] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625604 ] 00:03:37.077 [2024-11-04 16:15:03.742245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:37.077 [2024-11-04 16:15:03.786020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:37.077 [2024-11-04 16:15:03.786121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:37.077 [2024-11-04 16:15:03.786366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:37.077 [2024-11-04 16:15:03.786369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.010 Running I/O for 1 seconds... 00:03:38.010 lcore 0: 208134 00:03:38.010 lcore 1: 208134 00:03:38.010 lcore 2: 208132 00:03:38.010 lcore 3: 208133 00:03:38.010 done. 
00:03:38.010 00:03:38.010 real 0m1.174s 00:03:38.010 user 0m4.096s 00:03:38.010 sys 0m0.074s 00:03:38.010 16:15:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:38.010 16:15:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:38.010 ************************************ 00:03:38.010 END TEST event_perf 00:03:38.010 ************************************ 00:03:38.268 16:15:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:38.268 16:15:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:38.268 16:15:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:38.268 16:15:04 event -- common/autotest_common.sh@10 -- # set +x 00:03:38.268 ************************************ 00:03:38.268 START TEST event_reactor 00:03:38.268 ************************************ 00:03:38.268 16:15:04 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:38.268 [2024-11-04 16:15:04.909797] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:38.268 [2024-11-04 16:15:04.909866] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625861 ] 00:03:38.268 [2024-11-04 16:15:04.976220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.268 [2024-11-04 16:15:05.015279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.640 test_start 00:03:39.640 oneshot 00:03:39.640 tick 100 00:03:39.640 tick 100 00:03:39.640 tick 250 00:03:39.640 tick 100 00:03:39.640 tick 100 00:03:39.640 tick 250 00:03:39.640 tick 100 00:03:39.640 tick 500 00:03:39.640 tick 100 00:03:39.640 tick 100 00:03:39.640 tick 250 00:03:39.640 tick 100 00:03:39.640 tick 100 00:03:39.640 test_end 00:03:39.640 00:03:39.640 real 0m1.165s 00:03:39.640 user 0m1.091s 00:03:39.640 sys 0m0.069s 00:03:39.640 16:15:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.640 16:15:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:39.640 ************************************ 00:03:39.641 END TEST event_reactor 00:03:39.641 ************************************ 00:03:39.641 16:15:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:39.641 16:15:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:39.641 16:15:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.641 16:15:06 event -- common/autotest_common.sh@10 -- # set +x 00:03:39.641 ************************************ 00:03:39.641 START TEST event_reactor_perf 00:03:39.641 ************************************ 00:03:39.641 16:15:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:03:39.641 [2024-11-04 16:15:06.136179] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:39.641 [2024-11-04 16:15:06.136249] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626108 ] 00:03:39.641 [2024-11-04 16:15:06.200248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.641 [2024-11-04 16:15:06.239737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.575 test_start 00:03:40.575 test_end 00:03:40.575 Performance: 514957 events per second 00:03:40.575 00:03:40.575 real 0m1.162s 00:03:40.575 user 0m1.097s 00:03:40.575 sys 0m0.060s 00:03:40.575 16:15:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.575 16:15:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:40.575 ************************************ 00:03:40.575 END TEST event_reactor_perf 00:03:40.575 ************************************ 00:03:40.575 16:15:07 event -- event/event.sh@49 -- # uname -s 00:03:40.575 16:15:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:40.575 16:15:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:40.575 16:15:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.575 16:15:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.575 16:15:07 event -- common/autotest_common.sh@10 -- # set +x 00:03:40.575 ************************************ 00:03:40.575 START TEST event_scheduler 00:03:40.575 ************************************ 00:03:40.575 16:15:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:40.833 * Looking for test storage... 00:03:40.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.833 16:15:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.833 --rc genhtml_branch_coverage=1 00:03:40.833 --rc genhtml_function_coverage=1 00:03:40.833 --rc genhtml_legend=1 00:03:40.833 --rc geninfo_all_blocks=1 00:03:40.833 --rc geninfo_unexecuted_blocks=1 00:03:40.833 00:03:40.833 ' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.833 --rc genhtml_branch_coverage=1 00:03:40.833 --rc genhtml_function_coverage=1 00:03:40.833 --rc 
genhtml_legend=1 00:03:40.833 --rc geninfo_all_blocks=1 00:03:40.833 --rc geninfo_unexecuted_blocks=1 00:03:40.833 00:03:40.833 ' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.833 --rc genhtml_branch_coverage=1 00:03:40.833 --rc genhtml_function_coverage=1 00:03:40.833 --rc genhtml_legend=1 00:03:40.833 --rc geninfo_all_blocks=1 00:03:40.833 --rc geninfo_unexecuted_blocks=1 00:03:40.833 00:03:40.833 ' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.833 --rc genhtml_branch_coverage=1 00:03:40.833 --rc genhtml_function_coverage=1 00:03:40.833 --rc genhtml_legend=1 00:03:40.833 --rc geninfo_all_blocks=1 00:03:40.833 --rc geninfo_unexecuted_blocks=1 00:03:40.833 00:03:40.833 ' 00:03:40.833 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:40.833 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2626394 00:03:40.833 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.833 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:40.833 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2626394 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2626394 ']' 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.833 16:15:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:40.833 [2024-11-04 16:15:07.551696] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:03:40.833 [2024-11-04 16:15:07.551745] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626394 ] 00:03:40.833 [2024-11-04 16:15:07.610349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:40.833 [2024-11-04 16:15:07.652826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.833 [2024-11-04 16:15:07.652915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:40.833 [2024-11-04 16:15:07.653002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:40.833 [2024-11-04 16:15:07.653003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:41.092 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:41.092 [2024-11-04 16:15:07.709589] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:41.092 [2024-11-04 16:15:07.709615] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:41.092 [2024-11-04 16:15:07.709625] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:41.092 [2024-11-04 16:15:07.709631] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:41.092 [2024-11-04 16:15:07.709636] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.092 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:41.092 [2024-11-04 16:15:07.782464] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.092 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.092 16:15:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 ************************************ 00:03:41.093 START TEST scheduler_create_thread 00:03:41.093 ************************************ 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 2 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 3 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 4 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 5 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 6 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 7 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 8 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 9 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 10 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.093 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.378 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.378 16:15:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:41.378 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.378 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:42.756 16:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.756 16:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:42.756 16:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:42.756 16:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.756 16:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:43.691 16:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.691 00:03:43.691 real 0m2.617s 00:03:43.691 user 0m0.015s 00:03:43.691 sys 0m0.003s 00:03:43.691 16:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.691 16:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:43.691 ************************************ 00:03:43.691 END TEST scheduler_create_thread 00:03:43.691 ************************************ 00:03:43.691 16:15:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:43.691 16:15:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2626394 00:03:43.691 16:15:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2626394 ']' 00:03:43.691 16:15:10 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2626394 00:03:43.691 16:15:10 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:03:43.691 16:15:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.691 16:15:10 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626394 00:03:43.950 16:15:10 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:03:43.950 16:15:10 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:03:43.950 16:15:10 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626394' 00:03:43.950 killing process with pid 2626394 00:03:43.950 16:15:10 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2626394 00:03:43.950 16:15:10 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2626394 00:03:44.209 [2024-11-04 16:15:10.912726] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:03:44.468 00:03:44.468 real 0m3.730s 00:03:44.468 user 0m5.614s 00:03:44.468 sys 0m0.334s 00:03:44.468 16:15:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.468 16:15:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:44.468 ************************************ 00:03:44.468 END TEST event_scheduler 00:03:44.468 ************************************ 00:03:44.468 16:15:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:03:44.468 16:15:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:44.468 16:15:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.468 16:15:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.468 16:15:11 event -- common/autotest_common.sh@10 -- # set +x 00:03:44.468 ************************************ 00:03:44.468 START TEST app_repeat 00:03:44.468 ************************************ 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2627519 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2627519' 00:03:44.468 Process app_repeat pid: 2627519 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:44.468 spdk_app_start Round 0 00:03:44.468 16:15:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2627519 /var/tmp/spdk-nbd.sock 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2627519 ']' 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:44.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.468 16:15:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:44.468 [2024-11-04 16:15:11.191227] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:03:44.468 [2024-11-04 16:15:11.191292] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627519 ] 00:03:44.468 [2024-11-04 16:15:11.257425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:44.727 [2024-11-04 16:15:11.298683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.727 [2024-11-04 16:15:11.298687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.727 16:15:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:44.727 16:15:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:44.727 16:15:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:44.985 Malloc0 00:03:44.985 16:15:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:44.985 Malloc1 00:03:44.985 16:15:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:44.985 
16:15:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:44.985 16:15:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:44.986 16:15:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:44.986 16:15:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:44.986 16:15:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:45.244 /dev/nbd0 00:03:45.244 16:15:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:45.244 16:15:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:45.244 1+0 records in 00:03:45.244 1+0 records out 00:03:45.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188365 s, 21.7 MB/s 00:03:45.244 16:15:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.244 16:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:45.244 16:15:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.244 16:15:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:45.244 16:15:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:45.244 16:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:45.244 16:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:45.244 16:15:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:45.503 /dev/nbd1 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:45.503 16:15:12 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:45.503 1+0 records in 00:03:45.503 1+0 records out 00:03:45.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187992 s, 21.8 MB/s 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:45.503 16:15:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:45.503 16:15:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:45.761 { 00:03:45.761 "nbd_device": "/dev/nbd0", 00:03:45.761 "bdev_name": "Malloc0" 00:03:45.761 }, 00:03:45.761 { 00:03:45.761 "nbd_device": "/dev/nbd1", 00:03:45.761 "bdev_name": "Malloc1" 00:03:45.761 } 00:03:45.761 ]' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:45.761 { 00:03:45.761 "nbd_device": "/dev/nbd0", 00:03:45.761 "bdev_name": "Malloc0" 00:03:45.761 
}, 00:03:45.761 { 00:03:45.761 "nbd_device": "/dev/nbd1", 00:03:45.761 "bdev_name": "Malloc1" 00:03:45.761 } 00:03:45.761 ]' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:45.761 /dev/nbd1' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:45.761 /dev/nbd1' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:45.761 256+0 records in 00:03:45.761 256+0 records out 00:03:45.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102888 s, 102 MB/s 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:45.761 256+0 records in 00:03:45.761 256+0 records out 00:03:45.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01311 s, 80.0 MB/s 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:45.761 256+0 records in 00:03:45.761 256+0 records out 00:03:45.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156041 s, 67.2 MB/s 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:45.761 16:15:12 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:45.761 16:15:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:46.019 16:15:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:46.277 16:15:12 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:46.277 16:15:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:46.278 16:15:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:46.536 16:15:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:46.536 16:15:13 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:46.795 16:15:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:46.795 [2024-11-04 16:15:13.578732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:46.795 [2024-11-04 16:15:13.615726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:46.795 [2024-11-04 16:15:13.615732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.054 [2024-11-04 16:15:13.656484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:47.054 [2024-11-04 16:15:13.656523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:50.339 16:15:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:50.339 16:15:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:03:50.339 spdk_app_start Round 1 00:03:50.339 16:15:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2627519 /var/tmp/spdk-nbd.sock 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2627519 ']' 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:50.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
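The `waitfornbd_exit` calls traced above poll `/proc/partitions` in a bounded loop — the `(( i = 1 ))`, `(( i <= 20 ))`, `break`, `return 0` sequence visible in the trace. A minimal sketch of that retry pattern, with the condition generalized to a caller-supplied command (the name `wait_for_cond` and the 0.1 s delay are assumptions for illustration, not the exact `nbd_common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the bounded polling loop seen in the trace. wait_for_cond is a
# hypothetical name; the real helpers hardcode the /proc/partitions check.
wait_for_cond() {
    local cond=$1
    local i
    for (( i = 1; i <= 20; i++ )); do
        # Stop polling as soon as the condition holds (the 'break' in the trace).
        if eval "$cond"; then
            return 0
        fi
        sleep 0.1  # assumed delay between attempts
    done
    return 1  # condition never became true within 20 attempts
}

# Usage mirroring the traced calls:
#   wait_for_cond 'grep -q -w nbd0 /proc/partitions'     # like waitfornbd
#   wait_for_cond '! grep -q -w nbd0 /proc/partitions'   # like waitfornbd_exit
```

In the log both devices disappear on the first probe, so the loop breaks immediately with `i = 1`.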
00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.339 16:15:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:50.339 16:15:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:50.339 Malloc0 00:03:50.340 16:15:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:50.340 Malloc1 00:03:50.340 16:15:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.340 16:15:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:50.597 /dev/nbd0 00:03:50.597 16:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:50.597 16:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:50.597 16:15:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:50.597 16:15:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:50.597 16:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:50.597 16:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:50.598 1+0 records in 00:03:50.598 1+0 records out 00:03:50.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217939 s, 18.8 MB/s 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:50.598 16:15:17 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:50.598 16:15:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:50.598 16:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:50.598 16:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.598 16:15:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:50.856 /dev/nbd1 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:50.856 1+0 records in 00:03:50.856 1+0 records out 00:03:50.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182525 s, 22.4 MB/s 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:50.856 16:15:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:50.856 { 00:03:50.856 "nbd_device": "/dev/nbd0", 00:03:50.856 "bdev_name": "Malloc0" 00:03:50.856 }, 00:03:50.856 { 00:03:50.856 "nbd_device": "/dev/nbd1", 00:03:50.856 "bdev_name": "Malloc1" 00:03:50.856 } 00:03:50.856 ]' 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:50.856 { 00:03:50.856 "nbd_device": "/dev/nbd0", 00:03:50.856 "bdev_name": "Malloc0" 00:03:50.856 }, 00:03:50.856 { 00:03:50.856 "nbd_device": "/dev/nbd1", 00:03:50.856 "bdev_name": "Malloc1" 00:03:50.856 } 00:03:50.856 ]' 00:03:50.856 16:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:51.114 16:15:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:51.114 /dev/nbd1' 00:03:51.114 16:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:51.114 /dev/nbd1' 00:03:51.114 
16:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:51.114 16:15:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:51.114 16:15:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:51.115 256+0 records in 00:03:51.115 256+0 records out 00:03:51.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104384 s, 100 MB/s 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:51.115 256+0 records in 00:03:51.115 256+0 records out 00:03:51.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134371 s, 78.0 MB/s 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:51.115 256+0 records in 00:03:51.115 256+0 records out 00:03:51.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146861 s, 71.4 MB/s 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:51.115 16:15:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:51.373 16:15:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:51.373 16:15:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:51.631 16:15:18 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:51.631 16:15:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:51.631 16:15:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:51.890 16:15:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:52.148 [2024-11-04 16:15:18.797801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:52.148 [2024-11-04 16:15:18.834484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:52.148 [2024-11-04 16:15:18.834486] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.148 [2024-11-04 16:15:18.875984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:52.148 [2024-11-04 16:15:18.876027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:55.432 16:15:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:55.432 16:15:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:03:55.432 spdk_app_start Round 2 00:03:55.432 16:15:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2627519 /var/tmp/spdk-nbd.sock 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2627519 ']' 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:55.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
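The `nbd_dd_data_verify` calls traced above first `dd` 1 MiB of `/dev/urandom` into a temp file, copy it onto each NBD device with `oflag=direct`, then byte-compare each device against the source with `cmp -b -n 1M`. A self-contained sketch of that write/verify flow, using temporary files as stand-ins for `/dev/nbd0`/`/dev/nbd1` and dropping `oflag=direct` so it runs without real block devices:

```shell
#!/usr/bin/env bash
# Write/verify pattern from nbd_dd_data_verify, demonstrated on temp files.
# dev0/dev1 are stand-ins for /dev/nbd0 and /dev/nbd1; the real test also
# passes oflag=direct, which plain files may not support.
set -e
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# Write phase: 256 x 4 KiB of random data, copied onto each "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1M of each "device" with the source;
# cmp exits non-zero (failing the test under set -e) on any mismatch.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$dev0" "$dev1"
```

The `rm` of the random temp file at the end corresponds to the `nbd_common.sh@85 -- # rm .../nbdrandtest` events in the trace.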
00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.432 16:15:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:55.432 16:15:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.432 Malloc0 00:03:55.432 16:15:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.432 Malloc1 00:03:55.432 16:15:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.432 16:15:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:55.691 /dev/nbd0 00:03:55.691 16:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:55.691 16:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:55.692 1+0 records in 00:03:55.692 1+0 records out 00:03:55.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.6809e-05 s, 42.3 MB/s 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:55.692 16:15:22 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:55.692 16:15:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:55.692 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:55.692 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.692 16:15:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:55.951 /dev/nbd1 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:55.951 1+0 records in 00:03:55.951 1+0 records out 00:03:55.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229143 s, 17.9 MB/s 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:55.951 16:15:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.951 16:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:56.210 { 00:03:56.210 "nbd_device": "/dev/nbd0", 00:03:56.210 "bdev_name": "Malloc0" 00:03:56.210 }, 00:03:56.210 { 00:03:56.210 "nbd_device": "/dev/nbd1", 00:03:56.210 "bdev_name": "Malloc1" 00:03:56.210 } 00:03:56.210 ]' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:56.210 { 00:03:56.210 "nbd_device": "/dev/nbd0", 00:03:56.210 "bdev_name": "Malloc0" 00:03:56.210 }, 00:03:56.210 { 00:03:56.210 "nbd_device": "/dev/nbd1", 00:03:56.210 "bdev_name": "Malloc1" 00:03:56.210 } 00:03:56.210 ]' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:56.210 /dev/nbd1' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:56.210 /dev/nbd1' 00:03:56.210 
16:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:56.210 256+0 records in 00:03:56.210 256+0 records out 00:03:56.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101976 s, 103 MB/s 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:56.210 256+0 records in 00:03:56.210 256+0 records out 00:03:56.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014298 s, 73.3 MB/s 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:56.210 256+0 records in 00:03:56.210 256+0 records out 00:03:56.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147802 s, 70.9 MB/s 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:56.210 16:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:56.211 16:15:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:56.211 16:15:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:56.469 16:15:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:56.727 16:15:23 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:56.727 16:15:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.728 16:15:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:56.986 16:15:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:56.986 16:15:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:57.245 16:15:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:57.245 [2024-11-04 16:15:24.045352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:57.503 [2024-11-04 16:15:24.083179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.503 [2024-11-04 16:15:24.083181] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.503 [2024-11-04 16:15:24.123757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:57.503 [2024-11-04 16:15:24.123798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:00.162 16:15:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2627519 /var/tmp/spdk-nbd.sock 00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2627519 ']' 00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:00.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.162 16:15:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:00.421 16:15:27 event.app_repeat -- event/event.sh@39 -- # killprocess 2627519 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2627519 ']' 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2627519 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627519 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627519' 00:04:00.421 killing process with pid 2627519 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2627519 00:04:00.421 16:15:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2627519 00:04:00.681 spdk_app_start is called in Round 0. 00:04:00.681 Shutdown signal received, stop current app iteration 00:04:00.681 Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 reinitialization... 00:04:00.681 spdk_app_start is called in Round 1. 00:04:00.681 Shutdown signal received, stop current app iteration 00:04:00.681 Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 reinitialization... 00:04:00.681 spdk_app_start is called in Round 2. 
00:04:00.681 Shutdown signal received, stop current app iteration 00:04:00.681 Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 reinitialization... 00:04:00.681 spdk_app_start is called in Round 3. 00:04:00.681 Shutdown signal received, stop current app iteration 00:04:00.681 16:15:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:00.681 16:15:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:00.681 00:04:00.681 real 0m16.113s 00:04:00.681 user 0m35.280s 00:04:00.681 sys 0m2.490s 00:04:00.681 16:15:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.681 16:15:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.681 ************************************ 00:04:00.681 END TEST app_repeat 00:04:00.681 ************************************ 00:04:00.681 16:15:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:00.681 16:15:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:00.681 16:15:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.681 16:15:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.681 16:15:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:00.681 ************************************ 00:04:00.681 START TEST cpu_locks 00:04:00.681 ************************************ 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:00.681 * Looking for test storage... 
00:04:00.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.681 16:15:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.681 --rc genhtml_branch_coverage=1 00:04:00.681 --rc genhtml_function_coverage=1 00:04:00.681 --rc genhtml_legend=1 00:04:00.681 --rc geninfo_all_blocks=1 00:04:00.681 --rc geninfo_unexecuted_blocks=1 00:04:00.681 00:04:00.681 ' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.681 --rc genhtml_branch_coverage=1 00:04:00.681 --rc genhtml_function_coverage=1 00:04:00.681 --rc genhtml_legend=1 00:04:00.681 --rc geninfo_all_blocks=1 00:04:00.681 --rc geninfo_unexecuted_blocks=1 
00:04:00.681 00:04:00.681 ' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.681 --rc genhtml_branch_coverage=1 00:04:00.681 --rc genhtml_function_coverage=1 00:04:00.681 --rc genhtml_legend=1 00:04:00.681 --rc geninfo_all_blocks=1 00:04:00.681 --rc geninfo_unexecuted_blocks=1 00:04:00.681 00:04:00.681 ' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.681 --rc genhtml_branch_coverage=1 00:04:00.681 --rc genhtml_function_coverage=1 00:04:00.681 --rc genhtml_legend=1 00:04:00.681 --rc geninfo_all_blocks=1 00:04:00.681 --rc geninfo_unexecuted_blocks=1 00:04:00.681 00:04:00.681 ' 00:04:00.681 16:15:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:00.681 16:15:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:00.681 16:15:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:00.681 16:15:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.681 16:15:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:00.941 ************************************ 00:04:00.941 START TEST default_locks 00:04:00.941 ************************************ 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2630538 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2630538 00:04:00.941 16:15:27 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2630538 ']' 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.941 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:00.941 [2024-11-04 16:15:27.581384] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:00.941 [2024-11-04 16:15:27.581421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630538 ] 00:04:00.941 [2024-11-04 16:15:27.641829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.941 [2024-11-04 16:15:27.680999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.199 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.199 16:15:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:01.199 16:15:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2630538 00:04:01.199 16:15:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2630538 00:04:01.199 16:15:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:01.767 lslocks: write error 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2630538 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2630538 ']' 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2630538 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630538 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2630538' 00:04:01.767 killing process with pid 2630538 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2630538 00:04:01.767 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2630538 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2630538 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2630538 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2630538 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2630538 ']' 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:02.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2630538) - No such process 00:04:02.025 ERROR: process (pid: 2630538) is no longer running 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:02.025 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:02.026 00:04:02.026 real 0m1.158s 00:04:02.026 user 0m1.141s 00:04:02.026 sys 0m0.512s 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.026 16:15:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:02.026 ************************************ 00:04:02.026 END TEST default_locks 00:04:02.026 ************************************ 00:04:02.026 16:15:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:02.026 16:15:28 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.026 16:15:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.026 16:15:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:02.026 ************************************ 00:04:02.026 START TEST default_locks_via_rpc 00:04:02.026 ************************************ 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2630738 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2630738 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2630738 ']' 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.026 16:15:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.026 [2024-11-04 16:15:28.789053] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:02.026 [2024-11-04 16:15:28.789092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630738 ] 00:04:02.284 [2024-11-04 16:15:28.850753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.284 [2024-11-04 16:15:28.892906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.284 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.284 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:02.284 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:02.284 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.284 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.542 16:15:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2630738 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2630738 00:04:02.542 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2630738 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2630738 ']' 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2630738 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630738 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630738' 00:04:02.800 killing process with pid 2630738 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2630738 00:04:02.800 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2630738 00:04:03.058 00:04:03.058 real 0m1.134s 00:04:03.058 user 0m1.112s 00:04:03.058 sys 0m0.501s 00:04:03.058 16:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.058 16:15:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.058 ************************************ 00:04:03.058 END TEST default_locks_via_rpc 00:04:03.058 ************************************ 00:04:03.316 16:15:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:03.316 16:15:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.316 16:15:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.316 16:15:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 ************************************ 00:04:03.316 START TEST non_locking_app_on_locked_coremask 00:04:03.316 ************************************ 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2630882 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2630882 /var/tmp/spdk.sock 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2630882 ']' 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.316 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:03.316 [2024-11-04 16:15:29.989110] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:03.316 [2024-11-04 16:15:29.989152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630882 ] 00:04:03.316 [2024-11-04 16:15:30.053792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.316 [2024-11-04 16:15:30.098369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2631055 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2631055 /var/tmp/spdk2.sock 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2631055 ']' 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.575 16:15:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:03.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.575 16:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:03.575 [2024-11-04 16:15:30.365069] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:03.575 [2024-11-04 16:15:30.365119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631055 ] 00:04:03.834 [2024-11-04 16:15:30.455893] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:03.834 [2024-11-04 16:15:30.455919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.834 [2024-11-04 16:15:30.543823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.402 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.402 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:04.402 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2630882 00:04:04.402 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2630882 00:04:04.402 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:05.118 lslocks: write error 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2630882 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2630882 ']' 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2630882 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630882 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2630882' 00:04:05.118 killing process with pid 2630882 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2630882 00:04:05.118 16:15:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2630882 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2631055 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2631055 ']' 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2631055 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631055 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631055' 00:04:05.687 killing process with pid 2631055 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2631055 00:04:05.687 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2631055 00:04:05.946 00:04:05.946 real 0m2.650s 00:04:05.946 user 0m2.778s 00:04:05.946 sys 0m0.885s 00:04:05.946 16:15:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.946 16:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:05.946 ************************************ 00:04:05.946 END TEST non_locking_app_on_locked_coremask 00:04:05.946 ************************************ 00:04:05.946 16:15:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:05.946 16:15:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.946 16:15:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.946 16:15:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:05.946 ************************************ 00:04:05.946 START TEST locking_app_on_unlocked_coremask 00:04:05.946 ************************************ 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2631357 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2631357 /var/tmp/spdk.sock 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2631357 ']' 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:05.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.946 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:05.946 [2024-11-04 16:15:32.698734] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:05.946 [2024-11-04 16:15:32.698775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631357 ] 00:04:05.946 [2024-11-04 16:15:32.760731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:05.946 [2024-11-04 16:15:32.760757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.205 [2024-11-04 16:15:32.803189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2631560 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2631560 /var/tmp/spdk2.sock 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2631560 ']' 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:06.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:06.205 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.206 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:06.464 [2024-11-04 16:15:33.063383] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:06.464 [2024-11-04 16:15:33.063429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631560 ] 00:04:06.464 [2024-11-04 16:15:33.149090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.464 [2024-11-04 16:15:33.229367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.401 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.401 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:07.401 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2631560 00:04:07.401 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:07.401 16:15:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2631560 00:04:07.401 lslocks: write error 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2631357 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2631357 ']' 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2631357 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631357 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631357' 00:04:07.401 killing process with pid 2631357 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2631357 00:04:07.401 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2631357 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2631560 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2631560 ']' 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2631560 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.969 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631560 00:04:08.228 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.228 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.228 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631560' 00:04:08.228 killing process with pid 2631560 00:04:08.228 16:15:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2631560 00:04:08.228 16:15:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2631560 00:04:08.487 00:04:08.487 real 0m2.478s 00:04:08.487 user 0m2.628s 00:04:08.487 sys 0m0.796s 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:08.487 ************************************ 00:04:08.487 END TEST locking_app_on_unlocked_coremask 00:04:08.487 ************************************ 00:04:08.487 16:15:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:08.487 16:15:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.487 16:15:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.487 16:15:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:08.487 ************************************ 00:04:08.487 START TEST locking_app_on_locked_coremask 00:04:08.487 ************************************ 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2631836 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2631836 /var/tmp/spdk.sock 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2631836 ']' 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.487 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:08.487 [2024-11-04 16:15:35.243626] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:08.488 [2024-11-04 16:15:35.243666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631836 ] 00:04:08.488 [2024-11-04 16:15:35.305068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.747 [2024-11-04 16:15:35.347200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2631953 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2631953 /var/tmp/spdk2.sock 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2631953 /var/tmp/spdk2.sock 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2631953 /var/tmp/spdk2.sock 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2631953 ']' 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:08.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.747 16:15:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:09.006 [2024-11-04 16:15:35.608910] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:09.006 [2024-11-04 16:15:35.608957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631953 ] 00:04:09.006 [2024-11-04 16:15:35.699663] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2631836 has claimed it. 00:04:09.006 [2024-11-04 16:15:35.699700] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:09.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2631953) - No such process 00:04:09.573 ERROR: process (pid: 2631953) is no longer running 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2631836 00:04:09.573 16:15:36 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2631836 00:04:09.573 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:09.832 lslocks: write error 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2631836 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2631836 ']' 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2631836 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.832 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631836 00:04:10.091 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.091 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.091 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631836' 00:04:10.091 killing process with pid 2631836 00:04:10.091 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2631836 00:04:10.091 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2631836 00:04:10.350 00:04:10.350 real 0m1.775s 00:04:10.350 user 0m1.895s 00:04:10.350 sys 0m0.587s 00:04:10.350 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.350 16:15:36 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:10.350 ************************************ 00:04:10.350 END TEST locking_app_on_locked_coremask 00:04:10.350 ************************************ 00:04:10.350 16:15:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:10.350 16:15:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.350 16:15:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.350 16:15:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:10.350 ************************************ 00:04:10.350 START TEST locking_overlapped_coremask 00:04:10.350 ************************************ 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2632276 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2632276 /var/tmp/spdk.sock 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2632276 ']' 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.350 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:10.350 [2024-11-04 16:15:37.084290] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:10.350 [2024-11-04 16:15:37.084330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632276 ] 00:04:10.350 [2024-11-04 16:15:37.146679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:10.608 [2024-11-04 16:15:37.192033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.609 [2024-11-04 16:15:37.192129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.609 [2024-11-04 16:15:37.192132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2632321 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2632321 /var/tmp/spdk2.sock 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2632321 /var/tmp/spdk2.sock 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2632321 /var/tmp/spdk2.sock 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2632321 ']' 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.609 16:15:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:10.867 [2024-11-04 16:15:37.451703] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:10.867 [2024-11-04 16:15:37.451765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632321 ] 00:04:10.867 [2024-11-04 16:15:37.543966] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2632276 has claimed it. 00:04:10.867 [2024-11-04 16:15:37.544002] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:11.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2632321) - No such process 00:04:11.434 ERROR: process (pid: 2632321) is no longer running 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2632276 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2632276 ']' 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2632276 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632276 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632276' 00:04:11.434 killing process with pid 2632276 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2632276 00:04:11.434 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2632276 00:04:11.693 00:04:11.693 real 0m1.411s 00:04:11.693 user 0m3.923s 00:04:11.693 sys 0m0.381s 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:11.693 
************************************ 00:04:11.693 END TEST locking_overlapped_coremask 00:04:11.693 ************************************ 00:04:11.693 16:15:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:11.693 16:15:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.693 16:15:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.693 16:15:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:11.693 ************************************ 00:04:11.693 START TEST locking_overlapped_coremask_via_rpc 00:04:11.693 ************************************ 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2632579 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2632579 /var/tmp/spdk.sock 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:11.693 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2632579 ']' 00:04:11.952 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.952 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.952 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:11.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.952 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.952 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.952 [2024-11-04 16:15:38.566755] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:11.952 [2024-11-04 16:15:38.566799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632579 ] 00:04:11.952 [2024-11-04 16:15:38.629553] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:11.952 [2024-11-04 16:15:38.629578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:11.952 [2024-11-04 16:15:38.670310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.952 [2024-11-04 16:15:38.670410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.952 [2024-11-04 16:15:38.670410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2632595 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2632595 /var/tmp/spdk2.sock 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2632595 ']' 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.210 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.210 [2024-11-04 16:15:38.927336] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:12.210 [2024-11-04 16:15:38.927385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632595 ] 00:04:12.210 [2024-11-04 16:15:39.017943] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:12.211 [2024-11-04 16:15:39.017976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:12.469 [2024-11-04 16:15:39.104953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:12.469 [2024-11-04 16:15:39.105070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.469 [2024-11-04 16:15:39.105071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.036 16:15:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.036 [2024-11-04 16:15:39.777674] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2632579 has claimed it. 00:04:13.036 request: 00:04:13.036 { 00:04:13.036 "method": "framework_enable_cpumask_locks", 00:04:13.036 "req_id": 1 00:04:13.036 } 00:04:13.036 Got JSON-RPC error response 00:04:13.036 response: 00:04:13.036 { 00:04:13.036 "code": -32603, 00:04:13.036 "message": "Failed to claim CPU core: 2" 00:04:13.036 } 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.036 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2632579 /var/tmp/spdk.sock 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2632579 ']' 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.037 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2632595 /var/tmp/spdk2.sock 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2632595 ']' 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:13.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.294 16:15:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:13.553 00:04:13.553 real 0m1.678s 00:04:13.553 user 0m0.808s 00:04:13.553 sys 0m0.143s 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.553 16:15:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.553 ************************************ 00:04:13.553 END TEST locking_overlapped_coremask_via_rpc 00:04:13.553 ************************************ 00:04:13.553 16:15:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:13.553 16:15:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2632579 ]] 00:04:13.553 16:15:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2632579 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2632579 ']' 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2632579 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632579 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632579' 00:04:13.553 killing process with pid 2632579 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2632579 00:04:13.553 16:15:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2632579 00:04:13.811 16:15:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2632595 ]] 00:04:13.811 16:15:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2632595 00:04:13.811 16:15:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2632595 ']' 00:04:13.811 16:15:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2632595 00:04:13.811 16:15:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:13.811 16:15:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.811 16:15:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632595 00:04:14.070 16:15:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:14.070 16:15:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:14.070 16:15:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2632595' 00:04:14.070 killing process with pid 2632595 00:04:14.070 16:15:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2632595 00:04:14.070 16:15:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2632595 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2632579 ]] 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2632579 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2632579 ']' 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2632579 00:04:14.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2632579) - No such process 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2632579 is not found' 00:04:14.329 Process with pid 2632579 is not found 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2632595 ]] 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2632595 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2632595 ']' 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2632595 00:04:14.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2632595) - No such process 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2632595 is not found' 00:04:14.329 Process with pid 2632595 is not found 00:04:14.329 16:15:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:14.329 00:04:14.329 real 0m13.626s 00:04:14.329 user 0m23.971s 00:04:14.329 sys 0m4.712s 00:04:14.329 16:15:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.329 
16:15:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:14.329 ************************************ 00:04:14.329 END TEST cpu_locks 00:04:14.329 ************************************ 00:04:14.329 00:04:14.329 real 0m37.526s 00:04:14.329 user 1m11.378s 00:04:14.329 sys 0m8.105s 00:04:14.329 16:15:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.329 16:15:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.329 ************************************ 00:04:14.329 END TEST event 00:04:14.329 ************************************ 00:04:14.329 16:15:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:14.329 16:15:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.329 16:15:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.329 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:04:14.329 ************************************ 00:04:14.329 START TEST thread 00:04:14.329 ************************************ 00:04:14.329 16:15:41 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:14.329 * Looking for test storage... 
00:04:14.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:14.329 16:15:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.329 16:15:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.329 16:15:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.587 16:15:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.587 16:15:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.587 16:15:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.587 16:15:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.587 16:15:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.588 16:15:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.588 16:15:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.588 16:15:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.588 16:15:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.588 16:15:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.588 16:15:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.588 16:15:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.588 16:15:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:14.588 16:15:41 thread -- scripts/common.sh@345 -- # : 1 00:04:14.588 16:15:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.588 16:15:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.588 16:15:41 thread -- scripts/common.sh@365 -- # decimal 1 00:04:14.588 16:15:41 thread -- scripts/common.sh@353 -- # local d=1 00:04:14.588 16:15:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.588 16:15:41 thread -- scripts/common.sh@355 -- # echo 1 00:04:14.588 16:15:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.588 16:15:41 thread -- scripts/common.sh@366 -- # decimal 2 00:04:14.588 16:15:41 thread -- scripts/common.sh@353 -- # local d=2 00:04:14.588 16:15:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.588 16:15:41 thread -- scripts/common.sh@355 -- # echo 2 00:04:14.588 16:15:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.588 16:15:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.588 16:15:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.588 16:15:41 thread -- scripts/common.sh@368 -- # return 0 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.588 --rc genhtml_branch_coverage=1 00:04:14.588 --rc genhtml_function_coverage=1 00:04:14.588 --rc genhtml_legend=1 00:04:14.588 --rc geninfo_all_blocks=1 00:04:14.588 --rc geninfo_unexecuted_blocks=1 00:04:14.588 00:04:14.588 ' 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.588 --rc genhtml_branch_coverage=1 00:04:14.588 --rc genhtml_function_coverage=1 00:04:14.588 --rc genhtml_legend=1 00:04:14.588 --rc geninfo_all_blocks=1 00:04:14.588 --rc geninfo_unexecuted_blocks=1 00:04:14.588 00:04:14.588 ' 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.588 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.588 --rc genhtml_branch_coverage=1 00:04:14.588 --rc genhtml_function_coverage=1 00:04:14.588 --rc genhtml_legend=1 00:04:14.588 --rc geninfo_all_blocks=1 00:04:14.588 --rc geninfo_unexecuted_blocks=1 00:04:14.588 00:04:14.588 ' 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.588 --rc genhtml_branch_coverage=1 00:04:14.588 --rc genhtml_function_coverage=1 00:04:14.588 --rc genhtml_legend=1 00:04:14.588 --rc geninfo_all_blocks=1 00:04:14.588 --rc geninfo_unexecuted_blocks=1 00:04:14.588 00:04:14.588 ' 00:04:14.588 16:15:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.588 16:15:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.588 ************************************ 00:04:14.588 START TEST thread_poller_perf 00:04:14.588 ************************************ 00:04:14.588 16:15:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:14.588 [2024-11-04 16:15:41.268189] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:14.588 [2024-11-04 16:15:41.268259] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633156 ] 00:04:14.588 [2024-11-04 16:15:41.335966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.588 [2024-11-04 16:15:41.375454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.588 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:15.963 [2024-11-04T15:15:42.787Z] ====================================== 00:04:15.963 [2024-11-04T15:15:42.787Z] busy:2106529196 (cyc) 00:04:15.963 [2024-11-04T15:15:42.787Z] total_run_count: 425000 00:04:15.963 [2024-11-04T15:15:42.787Z] tsc_hz: 2100000000 (cyc) 00:04:15.963 [2024-11-04T15:15:42.787Z] ====================================== 00:04:15.963 [2024-11-04T15:15:42.787Z] poller_cost: 4956 (cyc), 2360 (nsec) 00:04:15.963 00:04:15.963 real 0m1.172s 00:04:15.963 user 0m1.099s 00:04:15.963 sys 0m0.069s 00:04:15.963 16:15:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.963 16:15:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.963 ************************************ 00:04:15.963 END TEST thread_poller_perf 00:04:15.963 ************************************ 00:04:15.963 16:15:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:15.963 16:15:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:15.963 16:15:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.963 16:15:42 thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.963 ************************************ 00:04:15.963 START TEST thread_poller_perf 00:04:15.963 
************************************ 00:04:15.963 16:15:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:15.963 [2024-11-04 16:15:42.511616] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:15.963 [2024-11-04 16:15:42.511687] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633368 ] 00:04:15.963 [2024-11-04 16:15:42.577318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.963 [2024-11-04 16:15:42.616402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.963 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:16.898 [2024-11-04T15:15:43.722Z] ====================================== 00:04:16.898 [2024-11-04T15:15:43.722Z] busy:2101499102 (cyc) 00:04:16.898 [2024-11-04T15:15:43.722Z] total_run_count: 5585000 00:04:16.898 [2024-11-04T15:15:43.722Z] tsc_hz: 2100000000 (cyc) 00:04:16.898 [2024-11-04T15:15:43.722Z] ====================================== 00:04:16.898 [2024-11-04T15:15:43.722Z] poller_cost: 376 (cyc), 179 (nsec) 00:04:16.898 00:04:16.898 real 0m1.166s 00:04:16.898 user 0m1.098s 00:04:16.898 sys 0m0.065s 00:04:16.898 16:15:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.898 16:15:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:16.898 ************************************ 00:04:16.898 END TEST thread_poller_perf 00:04:16.898 ************************************ 00:04:16.898 16:15:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:16.898 00:04:16.898 real 0m2.646s 00:04:16.898 user 0m2.348s 00:04:16.898 sys 0m0.312s 00:04:16.898 16:15:43 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.898 16:15:43 thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.898 ************************************ 00:04:16.898 END TEST thread 00:04:16.898 ************************************ 00:04:17.156 16:15:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:17.156 16:15:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:17.156 16:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.156 16:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.156 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.156 ************************************ 00:04:17.156 START TEST app_cmdline 00:04:17.156 ************************************ 00:04:17.156 16:15:43 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:17.156 * Looking for test storage... 00:04:17.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:17.156 16:15:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.156 16:15:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.156 16:15:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.156 16:15:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.156 16:15:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.157 16:15:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.157 --rc genhtml_branch_coverage=1 
00:04:17.157 --rc genhtml_function_coverage=1 00:04:17.157 --rc genhtml_legend=1 00:04:17.157 --rc geninfo_all_blocks=1 00:04:17.157 --rc geninfo_unexecuted_blocks=1 00:04:17.157 00:04:17.157 ' 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.157 --rc genhtml_branch_coverage=1 00:04:17.157 --rc genhtml_function_coverage=1 00:04:17.157 --rc genhtml_legend=1 00:04:17.157 --rc geninfo_all_blocks=1 00:04:17.157 --rc geninfo_unexecuted_blocks=1 00:04:17.157 00:04:17.157 ' 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.157 --rc genhtml_branch_coverage=1 00:04:17.157 --rc genhtml_function_coverage=1 00:04:17.157 --rc genhtml_legend=1 00:04:17.157 --rc geninfo_all_blocks=1 00:04:17.157 --rc geninfo_unexecuted_blocks=1 00:04:17.157 00:04:17.157 ' 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.157 --rc genhtml_branch_coverage=1 00:04:17.157 --rc genhtml_function_coverage=1 00:04:17.157 --rc genhtml_legend=1 00:04:17.157 --rc geninfo_all_blocks=1 00:04:17.157 --rc geninfo_unexecuted_blocks=1 00:04:17.157 00:04:17.157 ' 00:04:17.157 16:15:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:17.157 16:15:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2633700 00:04:17.157 16:15:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2633700 00:04:17.157 16:15:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2633700 ']' 00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock
00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:17.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:17.157 16:15:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:17.157 [2024-11-04 16:15:43.964787] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
00:04:17.157 [2024-11-04 16:15:43.964838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633700 ]
00:04:17.415 [2024-11-04 16:15:44.028209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.415 [2024-11-04 16:15:44.070328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:04:17.675 {
00:04:17.675 "version": "SPDK v25.01-pre git sha1 018f47196",
00:04:17.675 "fields": {
00:04:17.675 "major": 25,
00:04:17.675 "minor": 1,
00:04:17.675 "patch": 0,
00:04:17.675 "suffix": "-pre",
00:04:17.675 "commit": "018f47196"
00:04:17.675 }
00:04:17.675 }
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@26 -- # sort
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:04:17.675 16:15:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:04:17.675 16:15:44 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:17.933 request:
00:04:17.933 {
00:04:17.933 "method": "env_dpdk_get_mem_stats",
00:04:17.933 "req_id": 1
00:04:17.933 }
00:04:17.933 Got JSON-RPC error response
00:04:17.933 response:
00:04:17.933 {
00:04:17.933 "code": -32601,
00:04:17.933 "message": "Method not found"
00:04:17.933 }
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:17.933 16:15:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2633700
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2633700 ']'
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2633700
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:17.933 16:15:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633700
00:04:17.934 16:15:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:17.934 16:15:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:17.934 16:15:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633700'
killing process with pid 2633700
16:15:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 2633700
00:04:17.934 16:15:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 2633700
00:04:18.192
00:04:18.192 real 0m1.248s
00:04:18.192 user 0m1.441s
00:04:18.192 sys 0m0.423s
00:04:18.192 16:15:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.192 16:15:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:18.192 ************************************
00:04:18.192 END TEST app_cmdline
00:04:18.192 ************************************
00:04:18.451 16:15:45 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:18.451 16:15:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.451 16:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.451 16:15:45 -- common/autotest_common.sh@10 -- # set +x
00:04:18.451 ************************************
00:04:18.451 START TEST version
00:04:18.451 ************************************
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
* Looking for test storage...
00:04:18.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1693 -- # lcov --version
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:18.451 16:15:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:18.451 16:15:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:18.451 16:15:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:18.451 16:15:45 version -- scripts/common.sh@336 -- # IFS=.-:
00:04:18.451 16:15:45 version -- scripts/common.sh@336 -- # read -ra ver1
00:04:18.451 16:15:45 version -- scripts/common.sh@337 -- # IFS=.-:
00:04:18.451 16:15:45 version -- scripts/common.sh@337 -- # read -ra ver2
00:04:18.451 16:15:45 version -- scripts/common.sh@338 -- # local 'op=<'
00:04:18.451 16:15:45 version -- scripts/common.sh@340 -- # ver1_l=2
00:04:18.451 16:15:45 version -- scripts/common.sh@341 -- # ver2_l=1
00:04:18.451 16:15:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:18.451 16:15:45 version -- scripts/common.sh@344 -- # case "$op" in
00:04:18.451 16:15:45 version -- scripts/common.sh@345 -- # : 1
00:04:18.451 16:15:45 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:18.451 16:15:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.451 16:15:45 version -- scripts/common.sh@365 -- # decimal 1
00:04:18.451 16:15:45 version -- scripts/common.sh@353 -- # local d=1
00:04:18.451 16:15:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.451 16:15:45 version -- scripts/common.sh@355 -- # echo 1
00:04:18.451 16:15:45 version -- scripts/common.sh@365 -- # ver1[v]=1
00:04:18.451 16:15:45 version -- scripts/common.sh@366 -- # decimal 2
00:04:18.451 16:15:45 version -- scripts/common.sh@353 -- # local d=2
00:04:18.451 16:15:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.451 16:15:45 version -- scripts/common.sh@355 -- # echo 2
00:04:18.451 16:15:45 version -- scripts/common.sh@366 -- # ver2[v]=2
00:04:18.451 16:15:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:18.451 16:15:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:18.451 16:15:45 version -- scripts/common.sh@368 -- # return 0
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.451 --rc genhtml_branch_coverage=1
00:04:18.451 --rc genhtml_function_coverage=1
00:04:18.451 --rc genhtml_legend=1
00:04:18.451 --rc geninfo_all_blocks=1
00:04:18.451 --rc geninfo_unexecuted_blocks=1
00:04:18.451
00:04:18.451 '
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.451 --rc genhtml_branch_coverage=1
00:04:18.451 --rc genhtml_function_coverage=1
00:04:18.451 --rc genhtml_legend=1
00:04:18.451 --rc geninfo_all_blocks=1
00:04:18.451 --rc geninfo_unexecuted_blocks=1
00:04:18.451
00:04:18.451 '
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.451 --rc genhtml_branch_coverage=1
00:04:18.451 --rc genhtml_function_coverage=1
00:04:18.451 --rc genhtml_legend=1
00:04:18.451 --rc geninfo_all_blocks=1
00:04:18.451 --rc geninfo_unexecuted_blocks=1
00:04:18.451
00:04:18.451 '
00:04:18.451 16:15:45 version -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.451 --rc genhtml_branch_coverage=1
00:04:18.451 --rc genhtml_function_coverage=1
00:04:18.451 --rc genhtml_legend=1
00:04:18.451 --rc geninfo_all_blocks=1
00:04:18.451 --rc geninfo_unexecuted_blocks=1
00:04:18.451
00:04:18.451 '
00:04:18.451 16:15:45 version -- app/version.sh@17 -- # get_header_version major
00:04:18.451 16:15:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # cut -f2
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # tr -d '"'
00:04:18.451 16:15:45 version -- app/version.sh@17 -- # major=25
00:04:18.451 16:15:45 version -- app/version.sh@18 -- # get_header_version minor
00:04:18.451 16:15:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # cut -f2
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # tr -d '"'
00:04:18.451 16:15:45 version -- app/version.sh@18 -- # minor=1
00:04:18.451 16:15:45 version -- app/version.sh@19 -- # get_header_version patch
00:04:18.451 16:15:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # cut -f2
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # tr -d '"'
00:04:18.451 16:15:45 version -- app/version.sh@19 -- # patch=0
00:04:18.451 16:15:45 version -- app/version.sh@20 -- # get_header_version suffix
00:04:18.451 16:15:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # cut -f2
00:04:18.451 16:15:45 version -- app/version.sh@14 -- # tr -d '"'
00:04:18.451 16:15:45 version -- app/version.sh@20 -- # suffix=-pre
00:04:18.452 16:15:45 version -- app/version.sh@22 -- # version=25.1
00:04:18.452 16:15:45 version -- app/version.sh@25 -- # (( patch != 0 ))
00:04:18.452 16:15:45 version -- app/version.sh@28 -- # version=25.1rc0
00:04:18.452 16:15:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:04:18.452 16:15:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:04:18.710 16:15:45 version -- app/version.sh@30 -- # py_version=25.1rc0
00:04:18.710 16:15:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:04:18.710
00:04:18.710 real 0m0.236s
00:04:18.710 user 0m0.138s
00:04:18.710 sys 0m0.137s
00:04:18.710 16:15:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.710 16:15:45 version -- common/autotest_common.sh@10 -- # set +x
00:04:18.710 ************************************
00:04:18.710 END TEST version
00:04:18.710 ************************************
00:04:18.710 16:15:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:04:18.710 16:15:45 -- spdk/autotest.sh@194 -- # uname -s
00:04:18.710 16:15:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:04:18.710 16:15:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:04:18.710 16:15:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:04:18.710 16:15:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@260 -- # timing_exit lib
00:04:18.710 16:15:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:18.710 16:15:45 -- common/autotest_common.sh@10 -- # set +x
00:04:18.710 16:15:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@277 -- # export NET_TYPE
00:04:18.710 16:15:45 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:04:18.710 16:15:45 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:04:18.710 16:15:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:04:18.710 16:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.710 16:15:45 -- common/autotest_common.sh@10 -- # set +x
00:04:18.710 ************************************
00:04:18.710 START TEST nvmf_tcp
00:04:18.710 ************************************
00:04:18.710 16:15:45 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
* Looking for test storage...
00:04:18.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:18.710 16:15:45 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:18.710 16:15:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:04:18.710 16:15:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:18.969 16:15:45 nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.969 --rc genhtml_branch_coverage=1
00:04:18.969 --rc genhtml_function_coverage=1
00:04:18.969 --rc genhtml_legend=1
00:04:18.969 --rc geninfo_all_blocks=1
00:04:18.969 --rc geninfo_unexecuted_blocks=1
00:04:18.969
00:04:18.969 '
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.969 --rc genhtml_branch_coverage=1
00:04:18.969 --rc genhtml_function_coverage=1
00:04:18.969 --rc genhtml_legend=1
00:04:18.969 --rc geninfo_all_blocks=1
00:04:18.969 --rc geninfo_unexecuted_blocks=1
00:04:18.969
00:04:18.969 '
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.969 --rc genhtml_branch_coverage=1
00:04:18.969 --rc genhtml_function_coverage=1
00:04:18.969 --rc genhtml_legend=1
00:04:18.969 --rc geninfo_all_blocks=1
00:04:18.969 --rc geninfo_unexecuted_blocks=1
00:04:18.969
00:04:18.969 '
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.969 --rc genhtml_branch_coverage=1
00:04:18.969 --rc genhtml_function_coverage=1
00:04:18.969 --rc genhtml_legend=1
00:04:18.969 --rc geninfo_all_blocks=1
00:04:18.969 --rc geninfo_unexecuted_blocks=1
00:04:18.969
00:04:18.969 '
00:04:18.969 16:15:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:04:18.969 16:15:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:18.969 16:15:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.969 16:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:18.969 ************************************
00:04:18.969 START TEST nvmf_target_core
00:04:18.969 ************************************
00:04:18.969 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
* Looking for test storage...
00:04:18.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:18.969 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:18.969 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version
00:04:18.969 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:04:19.228 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.229 --rc genhtml_branch_coverage=1
00:04:19.229 --rc genhtml_function_coverage=1
00:04:19.229 --rc genhtml_legend=1
00:04:19.229 --rc geninfo_all_blocks=1
00:04:19.229 --rc geninfo_unexecuted_blocks=1
00:04:19.229
00:04:19.229 '
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.229 --rc genhtml_branch_coverage=1
00:04:19.229 --rc genhtml_function_coverage=1
00:04:19.229 --rc genhtml_legend=1
00:04:19.229 --rc geninfo_all_blocks=1
00:04:19.229 --rc geninfo_unexecuted_blocks=1
00:04:19.229
00:04:19.229 '
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.229 --rc genhtml_branch_coverage=1
00:04:19.229 --rc genhtml_function_coverage=1
00:04:19.229 --rc genhtml_legend=1
00:04:19.229 --rc geninfo_all_blocks=1
00:04:19.229 --rc geninfo_unexecuted_blocks=1
00:04:19.229
00:04:19.229 '
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.229 --rc genhtml_branch_coverage=1
00:04:19.229 --rc genhtml_function_coverage=1
00:04:19.229 --rc genhtml_legend=1
00:04:19.229 --rc geninfo_all_blocks=1
00:04:19.229 --rc geninfo_unexecuted_blocks=1
00:04:19.229
00:04:19.229 '
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.229 16:15:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:04:19.229 ************************************
00:04:19.229 START TEST nvmf_abort
00:04:19.229 ************************************
00:04:19.230 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
* Looking for test storage...
00:04:19.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:04:19.230 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:19.230 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version
00:04:19.230 16:15:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:19.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.230 --rc genhtml_branch_coverage=1
00:04:19.230 --rc genhtml_function_coverage=1
00:04:19.230 --rc genhtml_legend=1
00:04:19.230 --rc geninfo_all_blocks=1
00:04:19.230 --rc geninfo_unexecuted_blocks=1
00:04:19.230
00:04:19.230 '
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:19.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.230 --rc genhtml_branch_coverage=1
00:04:19.230 --rc genhtml_function_coverage=1
00:04:19.230 --rc genhtml_legend=1
00:04:19.230 --rc geninfo_all_blocks=1
00:04:19.230 --rc geninfo_unexecuted_blocks=1
00:04:19.230
00:04:19.230 '
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:19.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.230 --rc genhtml_branch_coverage=1
00:04:19.230 --rc genhtml_function_coverage=1
00:04:19.230 --rc genhtml_legend=1
00:04:19.230 --rc geninfo_all_blocks=1
00:04:19.230 --rc geninfo_unexecuted_blocks=1
00:04:19.230
00:04:19.230 '
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:19.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:19.230 --rc genhtml_branch_coverage=1
00:04:19.230 --rc genhtml_function_coverage=1
00:04:19.230 --rc genhtml_legend=1
00:04:19.230 --rc geninfo_all_blocks=1
00:04:19.230 --rc geninfo_unexecuted_blocks=1
00:04:19.230
00:04:19.230 '
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.230 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.489 16:15:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.489 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:19.490 16:15:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:26.054 16:15:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:26.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:26.054 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:26.054 16:15:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:26.054 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:26.055 Found net devices under 0000:86:00.0: cvl_0_0 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:26.055 Found net devices under 0000:86:00.1: cvl_0_1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:26.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:26.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:04:26.055 00:04:26.055 --- 10.0.0.2 ping statistics --- 00:04:26.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:26.055 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:26.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:26.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:04:26.055 00:04:26.055 --- 10.0.0.1 ping statistics --- 00:04:26.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:26.055 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2637223 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2637223 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2637223 ']' 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.055 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 [2024-11-04 16:15:52.007861] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:04:26.055 [2024-11-04 16:15:52.007908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:26.055 [2024-11-04 16:15:52.077834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:26.055 [2024-11-04 16:15:52.123484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:26.055 [2024-11-04 16:15:52.123520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:26.055 [2024-11-04 16:15:52.123527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.055 [2024-11-04 16:15:52.123533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.055 [2024-11-04 16:15:52.123537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:26.055 [2024-11-04 16:15:52.124863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:26.055 [2024-11-04 16:15:52.124960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.055 [2024-11-04 16:15:52.124962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 [2024-11-04 16:15:52.261036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 Malloc0 00:04:26.055 16:15:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.055 Delay0 00:04:26.055 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.056 [2024-11-04 16:15:52.340273] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.056 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:26.056 [2024-11-04 16:15:52.467272] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:27.954 Initializing NVMe Controllers 00:04:27.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:27.954 controller IO queue size 128 less than required 00:04:27.954 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:27.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:27.954 Initialization complete. Launching workers. 
00:04:27.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 37715 00:04:27.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37777, failed to submit 62 00:04:27.954 success 37719, unsuccessful 58, failed 0 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:27.954 rmmod nvme_tcp 00:04:27.954 rmmod nvme_fabrics 00:04:27.954 rmmod nvme_keyring 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:27.954 16:15:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2637223 ']' 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2637223 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2637223 ']' 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2637223 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637223 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637223' 00:04:27.954 killing process with pid 2637223 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2637223 00:04:27.954 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2637223 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:28.213 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:30.116 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:30.116 00:04:30.116 real 0m11.036s 00:04:30.116 user 0m11.492s 00:04:30.116 sys 0m5.357s 00:04:30.116 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.116 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:30.116 ************************************ 00:04:30.116 END TEST nvmf_abort 00:04:30.116 ************************************ 00:04:30.374 16:15:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:30.374 16:15:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:30.374 16:15:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.374 16:15:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:30.374 ************************************ 00:04:30.374 START TEST nvmf_ns_hotplug_stress 00:04:30.374 ************************************ 00:04:30.374 16:15:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:30.374 * Looking for test storage... 00:04:30.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.374 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.374 
16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.375 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.375 --rc genhtml_branch_coverage=1 00:04:30.375 --rc genhtml_function_coverage=1 00:04:30.375 --rc genhtml_legend=1 00:04:30.375 --rc geninfo_all_blocks=1 00:04:30.375 --rc geninfo_unexecuted_blocks=1 00:04:30.375 00:04:30.375 ' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.375 --rc genhtml_branch_coverage=1 00:04:30.375 --rc genhtml_function_coverage=1 00:04:30.375 --rc genhtml_legend=1 00:04:30.375 --rc geninfo_all_blocks=1 00:04:30.375 --rc geninfo_unexecuted_blocks=1 00:04:30.375 00:04:30.375 ' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.375 --rc genhtml_branch_coverage=1 00:04:30.375 --rc genhtml_function_coverage=1 00:04:30.375 --rc genhtml_legend=1 00:04:30.375 --rc geninfo_all_blocks=1 00:04:30.375 --rc geninfo_unexecuted_blocks=1 00:04:30.375 00:04:30.375 ' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.375 --rc genhtml_branch_coverage=1 00:04:30.375 --rc genhtml_function_coverage=1 00:04:30.375 --rc genhtml_legend=1 00:04:30.375 --rc geninfo_all_blocks=1 00:04:30.375 --rc geninfo_unexecuted_blocks=1 00:04:30.375 
00:04:30.375 ' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:30.375 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:35.642 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:35.643 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:35.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:35.643 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:35.643 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:35.643 Found net devices under 0000:86:00.0: cvl_0_0 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:35.643 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:04:35.643 Found net devices under 0000:86:00.1: cvl_0_1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:35.643 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:35.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:35.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:04:35.643 00:04:35.643 --- 10.0.0.2 ping statistics --- 00:04:35.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:35.643 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:04:35.643 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:35.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:35.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:04:35.644 00:04:35.644 --- 10.0.0.1 ping statistics --- 00:04:35.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:35.644 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2641174 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2641174 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2641174 ']' 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:35.644 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:35.644 [2024-11-04 16:16:02.381981] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:04:35.644 [2024-11-04 16:16:02.382024] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:35.644 [2024-11-04 16:16:02.449846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:35.902 [2024-11-04 16:16:02.492270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:35.902 [2024-11-04 16:16:02.492301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:35.902 [2024-11-04 16:16:02.492307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:35.902 [2024-11-04 16:16:02.492313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:35.903 [2024-11-04 16:16:02.492318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:35.903 [2024-11-04 16:16:02.493706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.903 [2024-11-04 16:16:02.493791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.903 [2024-11-04 16:16:02.493792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:35.903 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:04:36.161 [2024-11-04 16:16:02.790146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.161 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:36.419 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:36.419 [2024-11-04 16:16:03.199611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:36.419 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:36.677 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:36.936 Malloc0 00:04:36.936 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:37.194 Delay0 00:04:37.194 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:37.194 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:37.452 NULL1 00:04:37.452 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:37.711 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2641523 00:04:37.711 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:37.711 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:37.711 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:39.088 Read completed with error (sct=0, sc=11) 00:04:39.088 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:39.088 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:39.088 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:39.346 true 00:04:39.346 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:39.346 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:40.281 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:40.281 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:40.281 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:40.538 true 00:04:40.538 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:40.538 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:40.795 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:40.795 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:40.795 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:41.053 true 00:04:41.053 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:41.053 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:41.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:41.989 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:41.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:42.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:42.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:42.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:42.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:42.247 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:04:42.247 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:04:42.505 true 00:04:42.505 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:42.505 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:43.441 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:43.441 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:04:43.441 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:04:43.699 true 00:04:43.699 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:43.699 16:16:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:43.958 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:43.958 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:04:43.958 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:04:44.289 true 00:04:44.289 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:44.289 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:45.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:45.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:45.490 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
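The target configuration the script drove between starting nvmf_tgt and launching spdk_nvme_perf (ns_hotplug_stress.sh lines @27 through @42 in the trace) boils down to the RPC sequence below. `rpc.py` stands in for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path shown in the log, and every RPC here must go through the target's namespace and -s socket in practice.

```shell
# Condensed RPC sequence from the ns_hotplug_stress setup (arguments as logged)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0     # 32 MiB malloc bdev, 512 B blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, resized each pass
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

Delay0 (a 1 s delay wrapper over Malloc0) is the namespace that gets hot-removed and re-added throughout the stress phase; NULL1 exists so bdev_null_resize can exercise namespace resize notifications at the same time.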
00:04:45.490 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:04:45.750 true 00:04:45.750 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:45.750 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:46.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:46.686 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:46.686 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:04:46.686 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:04:46.944 true 00:04:46.944 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:46.944 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:47.204 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:47.463 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:04:47.463 16:16:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:04:47.463 true 00:04:47.722 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:47.722 16:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:48.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.659 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:48.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.917 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:04:48.917 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:04:49.176 true 00:04:49.176 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:49.176 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:04:50.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:50.111 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:50.111 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:04:50.111 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:04:50.370 true 00:04:50.370 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:50.370 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:50.628 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:50.628 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:04:50.628 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:04:50.888 true 00:04:50.888 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:50.888 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:52.265 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:04:52.265 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:52.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.265 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:04:52.265 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:04:52.524 true 00:04:52.524 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:52.524 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:53.461 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:53.461 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:04:53.461 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:04:53.719 true 00:04:53.719 16:16:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:53.719 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:53.977 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:53.977 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:04:53.977 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:04:54.236 true 00:04:54.236 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:54.236 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:04:55.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.612 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:04:55.612 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:04:55.871 true 00:04:55.871 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:55.871 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:56.808 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:56.808 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:04:56.808 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:04:57.067 true 00:04:57.067 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:57.067 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.067 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.325 16:16:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:04:57.325 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:04:57.584 true 00:04:57.584 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:57.584 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:58.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.520 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:58.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.779 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:04:58.779 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:04:59.037 true 00:04:59.037 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:04:59.037 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.973 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:59.973 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:04:59.973 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:00.231 true 00:05:00.231 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:00.231 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.490 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.749 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:00.749 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:00.749 true 00:05:00.749 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:00.749 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.125 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:02.125 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:02.384 true 00:05:02.384 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:02.384 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.323 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.323 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 
00:05:03.323 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:03.581 true 00:05:03.581 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:03.581 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.840 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.099 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:04.099 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:04.099 true 00:05:04.099 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:04.099 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.478 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.478 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:05.478 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:05.736 true 00:05:05.736 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:05.736 16:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.672 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.672 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:06.672 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:06.931 true 00:05:06.931 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523 00:05:06.931 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.190 16:16:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:07.449 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:07.449 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:07.449 true
00:05:07.449 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523
00:05:07.449 16:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:08.827 Initializing NVMe Controllers
00:05:08.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:08.827 Controller IO queue size 128, less than required.
00:05:08.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:08.827 Controller IO queue size 128, less than required.
00:05:08.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:08.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:08.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:08.827 Initialization complete. Launching workers.
00:05:08.827 ========================================================
00:05:08.827                                                                                                      Latency(us)
00:05:08.827 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:05:08.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2108.27    1.03   44089.75    2569.78 1080175.08
00:05:08.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18327.93    8.95    6983.64    2482.73  299414.91
00:05:08.827 ========================================================
00:05:08.827 Total                                                                    : 20436.20    9.98   10811.63    2482.73 1080175.08
00:05:08.827
00:05:08.827 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:08.827 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:08.827 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:09.085 true
00:05:09.085 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2641523
00:05:09.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2641523) - No such process
00:05:09.085 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2641523
00:05:09.085 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.344 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:09.344
16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:09.344 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:09.344 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:09.344 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:09.345 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:09.603 null0 00:05:09.603 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:09.603 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:09.603 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:09.862 null1 00:05:09.862 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:09.862 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:09.862 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:10.121 null2 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:10.121 null3 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.121 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:10.380 null4 00:05:10.380 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.380 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.380 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:10.639 null5 00:05:10.639 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.639 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.639 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:10.898 null6 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:10.898 null7 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:10.898 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2647272 2647273 2647274 2647276 2647279 2647280 2647283 2647284 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:10.899 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:11.158 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:11.418 16:16:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.418 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:11.677 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:11.937 16:16:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:11.937 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.196 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.196 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.196 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.197 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.456 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:12.716 16:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:12.716 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:12.976 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.977 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.977 16:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:13.236 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:13.236 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:13.236 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:13.236 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:13.236 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:13.237 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.237 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:13.237 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:13.496 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.755 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.755 16:16:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.014 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:14.015 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:14.274 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:14.274 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.274 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.274 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.274 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:14.275 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.534 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.794 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.053 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:15.312 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:15.313 rmmod nvme_tcp 00:05:15.313 rmmod nvme_fabrics 00:05:15.313 rmmod nvme_keyring 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2641174 ']' 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2641174 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2641174 ']' 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2641174 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2641174 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641174' 00:05:15.313 killing process with pid 2641174 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2641174 00:05:15.313 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2641174 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:15.572 16:16:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.572 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:17.478 00:05:17.478 real 0m47.238s 00:05:17.478 user 3m14.359s 00:05:17.478 sys 0m14.882s 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:17.478 ************************************ 00:05:17.478 END TEST nvmf_ns_hotplug_stress 00:05:17.478 ************************************ 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.478 ************************************ 00:05:17.478 START TEST nvmf_delete_subsystem 00:05:17.478 ************************************ 00:05:17.478 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:17.737 * Looking for test storage... 00:05:17.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.737 
16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.737 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.738 --rc genhtml_branch_coverage=1 00:05:17.738 --rc genhtml_function_coverage=1 00:05:17.738 --rc genhtml_legend=1 
00:05:17.738 --rc geninfo_all_blocks=1 00:05:17.738 --rc geninfo_unexecuted_blocks=1 00:05:17.738 00:05:17.738 ' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.738 --rc genhtml_branch_coverage=1 00:05:17.738 --rc genhtml_function_coverage=1 00:05:17.738 --rc genhtml_legend=1 00:05:17.738 --rc geninfo_all_blocks=1 00:05:17.738 --rc geninfo_unexecuted_blocks=1 00:05:17.738 00:05:17.738 ' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.738 --rc genhtml_branch_coverage=1 00:05:17.738 --rc genhtml_function_coverage=1 00:05:17.738 --rc genhtml_legend=1 00:05:17.738 --rc geninfo_all_blocks=1 00:05:17.738 --rc geninfo_unexecuted_blocks=1 00:05:17.738 00:05:17.738 ' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.738 --rc genhtml_branch_coverage=1 00:05:17.738 --rc genhtml_function_coverage=1 00:05:17.738 --rc genhtml_legend=1 00:05:17.738 --rc geninfo_all_blocks=1 00:05:17.738 --rc geninfo_unexecuted_blocks=1 00:05:17.738 00:05:17.738 ' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.738 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:17.739 16:16:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.739 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:23.013 16:16:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:23.013 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:23.013 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.013 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:23.014 Found net devices under 0000:86:00.0: cvl_0_0 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.014 16:16:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:23.014 Found net devices under 0000:86:00.1: cvl_0_1 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:23.014 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:23.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:23.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:05:23.273 00:05:23.273 --- 10.0.0.2 ping statistics --- 00:05:23.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.273 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:23.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:23.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:05:23.273 00:05:23.273 --- 10.0.0.1 ping statistics --- 00:05:23.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.273 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:23.273 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:23.273 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:23.273 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:23.273 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2651663 00:05:23.274 16:16:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2651663 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2651663 ']' 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.274 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.274 [2024-11-04 16:16:50.097715] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:05:23.274 [2024-11-04 16:16:50.097770] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:23.533 [2024-11-04 16:16:50.165466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.533 [2024-11-04 16:16:50.207177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:23.533 [2024-11-04 16:16:50.207215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:23.533 [2024-11-04 16:16:50.207223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.533 [2024-11-04 16:16:50.207229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.533 [2024-11-04 16:16:50.207234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:23.533 [2024-11-04 16:16:50.208403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.533 [2024-11-04 16:16:50.208407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.533 [2024-11-04 16:16:50.344634] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.533 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.792 [2024-11-04 16:16:50.360808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.792 NULL1 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.792 16:16:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.792 Delay0 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2651686 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:23.792 16:16:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:23.792 [2024-11-04 16:16:50.445522] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:25.696 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:25.696 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.696 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error 
(sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 
00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 starting I/O failed: -6 00:05:25.955 Write completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.955 Read completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 
00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 starting I/O failed: -6 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 [2024-11-04 16:16:52.526794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f335c000c40 is same with the state(6) to be set 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Write completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read completed with error (sct=0, sc=8) 00:05:25.956 Read 
completed with error (sct=0, sc=8)
00:05:25.956 Read completed with error (sct=0, sc=8)
00:05:25.956 Write completed with error (sct=0, sc=8)
00:05:26.894 [2024-11-04 16:16:53.499087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14499a0 is same with the state(6) to be set
00:05:26.894 Read completed with error (sct=0, sc=8)
00:05:26.894 Write completed with error (sct=0, sc=8)
00:05:26.894 [2024-11-04 16:16:53.528941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14482c0 is same with the state(6) to be set
00:05:26.894 [2024-11-04 16:16:53.529105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448860 is same with the state(6) to be set
00:05:26.894 [2024-11-04 16:16:53.529202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f335c00d350 is same with the state(6) to be set
00:05:26.894 [2024-11-04 16:16:53.529794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14484a0 is same with the state(6) to be set
00:05:26.894 Initializing NVMe Controllers
00:05:26.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:26.894 Controller IO queue size 128, less than required.
00:05:26.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:26.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:26.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:26.894 Initialization complete. Launching workers.
00:05:26.894 ========================================================
00:05:26.894                                                                 Latency(us)
00:05:26.894 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:05:26.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     194.60       0.10  945755.61     438.16 1011285.08
00:05:26.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     157.86       0.08  867154.52     264.68 1011628.55
00:05:26.894 ========================================================
00:05:26.894 Total                                                                    :     352.46       0.17  910551.17     264.68 1011628.55
00:05:26.894
00:05:26.894 [2024-11-04 16:16:53.530507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14499a0 (9): Bad file descriptor
00:05:26.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:26.894 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.894 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:26.894 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2651686 00:05:26.895 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2651686 00:05:27.463
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2651686) - No such process 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2651686 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2651686 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2651686 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.463 16:16:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 [2024-11-04 16:16:54.062490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2652377 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:27.463 16:16:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:27.463 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:27.463 [2024-11-04 16:16:54.128067] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:28.031 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:28.031 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:28.031 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:28.352 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:28.352 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:28.352 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:28.964 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:28.964 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:28.964 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:29.532 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:29.532 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:29.532 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # 
sleep 0.5 00:05:29.791 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:29.791 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:29.791 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:30.358 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:30.358 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:30.358 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:30.616 Initializing NVMe Controllers 00:05:30.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:30.616 Controller IO queue size 128, less than required. 00:05:30.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:30.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:30.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:30.616 Initialization complete. Launching workers. 
00:05:30.616 ========================================================
00:05:30.616                                                                 Latency(us)
00:05:30.616 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:05:30.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003324.94 1000131.74 1010218.93
00:05:30.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005277.17 1000185.84 1041834.50
00:05:30.616 ========================================================
00:05:30.616 Total                                                                    :     256.00       0.12 1004301.05 1000131.74 1041834.50
00:05:30.616
00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2652377 00:05:30.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2652377) - No such process 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2652377 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:05:30.873 rmmod nvme_tcp 00:05:30.873 rmmod nvme_fabrics 00:05:30.873 rmmod nvme_keyring 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:30.873 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2651663 ']' 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2651663 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2651663 ']' 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2651663 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.874 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2651663 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2651663' 00:05:31.132 killing process with pid 2651663 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2651663 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2651663 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.132 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.670 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:33.670 00:05:33.670 real 0m15.676s 00:05:33.670 user 0m28.782s 00:05:33.670 sys 0m5.237s 00:05:33.670 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.670 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:33.670 ************************************ 00:05:33.670 END TEST 
nvmf_delete_subsystem 00:05:33.670 ************************************ 00:05:33.670 16:17:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:33.670 16:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:33.670 16:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.670 16:17:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:33.670 ************************************ 00:05:33.670 START TEST nvmf_host_management 00:05:33.670 ************************************ 00:05:33.670 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:33.670 * Looking for test storage... 00:05:33.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.671 16:17:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.671 --rc genhtml_branch_coverage=1 00:05:33.671 --rc genhtml_function_coverage=1 00:05:33.671 --rc genhtml_legend=1 00:05:33.671 --rc 
geninfo_all_blocks=1 00:05:33.671 --rc geninfo_unexecuted_blocks=1 00:05:33.671 00:05:33.671 ' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.671 --rc genhtml_branch_coverage=1 00:05:33.671 --rc genhtml_function_coverage=1 00:05:33.671 --rc genhtml_legend=1 00:05:33.671 --rc geninfo_all_blocks=1 00:05:33.671 --rc geninfo_unexecuted_blocks=1 00:05:33.671 00:05:33.671 ' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.671 --rc genhtml_branch_coverage=1 00:05:33.671 --rc genhtml_function_coverage=1 00:05:33.671 --rc genhtml_legend=1 00:05:33.671 --rc geninfo_all_blocks=1 00:05:33.671 --rc geninfo_unexecuted_blocks=1 00:05:33.671 00:05:33.671 ' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.671 --rc genhtml_branch_coverage=1 00:05:33.671 --rc genhtml_function_coverage=1 00:05:33.671 --rc genhtml_legend=1 00:05:33.671 --rc geninfo_all_blocks=1 00:05:33.671 --rc geninfo_unexecuted_blocks=1 00:05:33.671 00:05:33.671 ' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.671 
16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:33.671 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:38.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:38.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:38.950 16:17:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:38.950 Found net devices under 0000:86:00.0: cvl_0_0 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:38.950 Found net devices under 0000:86:00.1: cvl_0_1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:38.950 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:38.951 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:38.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:38.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:05:38.951 00:05:38.951 --- 10.0.0.2 ping statistics --- 00:05:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:38.951 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:05:38.951 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:39.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:05:39.210 00:05:39.210 --- 10.0.0.1 ping statistics --- 00:05:39.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.210 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.210 16:17:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2656400 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2656400 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2656400 ']' 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.210 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.210 [2024-11-04 16:17:05.866625] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:05:39.210 [2024-11-04 16:17:05.866671] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.210 [2024-11-04 16:17:05.935279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.210 [2024-11-04 16:17:05.978727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:39.210 [2024-11-04 16:17:05.978763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.210 [2024-11-04 16:17:05.978772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.210 [2024-11-04 16:17:05.978779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.210 [2024-11-04 16:17:05.978784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:39.210 [2024-11-04 16:17:05.980272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.210 [2024-11-04 16:17:05.980357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.210 [2024-11-04 16:17:05.980480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.210 [2024-11-04 16:17:05.980480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.469 [2024-11-04 16:17:06.124624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:05:39.469 16:17:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.469 Malloc0 00:05:39.469 [2024-11-04 16:17:06.196195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2656602 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2656602 /var/tmp/bdevperf.sock 00:05:39.469 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2656602 ']' 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:39.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:39.470 { 00:05:39.470 "params": { 00:05:39.470 "name": "Nvme$subsystem", 00:05:39.470 "trtype": "$TEST_TRANSPORT", 00:05:39.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:39.470 "adrfam": "ipv4", 00:05:39.470 "trsvcid": "$NVMF_PORT", 00:05:39.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:39.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:39.470 "hdgst": ${hdgst:-false}, 
00:05:39.470 "ddgst": ${ddgst:-false} 00:05:39.470 }, 00:05:39.470 "method": "bdev_nvme_attach_controller" 00:05:39.470 } 00:05:39.470 EOF 00:05:39.470 )") 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:39.470 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:39.470 "params": { 00:05:39.470 "name": "Nvme0", 00:05:39.470 "trtype": "tcp", 00:05:39.470 "traddr": "10.0.0.2", 00:05:39.470 "adrfam": "ipv4", 00:05:39.470 "trsvcid": "4420", 00:05:39.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:39.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:39.470 "hdgst": false, 00:05:39.470 "ddgst": false 00:05:39.470 }, 00:05:39.470 "method": "bdev_nvme_attach_controller" 00:05:39.470 }' 00:05:39.470 [2024-11-04 16:17:06.291800] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:05:39.470 [2024-11-04 16:17:06.291845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2656602 ] 00:05:39.729 [2024-11-04 16:17:06.356992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.729 [2024-11-04 16:17:06.397954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.988 Running I/O for 10 seconds... 
00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:05:39.988 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=82 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 82 -ge 100 ']' 00:05:39.989 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.248 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.509 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.509 [2024-11-04 16:17:07.087325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dc200 is same with the state(6) to be set 00:05:40.509 [2024-11-04 16:17:07.087800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087885] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.087988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.087996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:40.510 [2024-11-04 16:17:07.088053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 
16:17:07.088132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.510 [2024-11-04 16:17:07.088250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.510 [2024-11-04 16:17:07.088258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088287] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:05:40.511 [2024-11-04 16:17:07.088455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 
16:17:07.088531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088692] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:40.511 [2024-11-04 16:17:07.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:40.511 [2024-11-04 16:17:07.088771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:05:40.511 [2024-11-04 16:17:07.089694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:05:40.511 task offset: 98304 on job bdev=Nvme0n1 fails 00:05:40.511 00:05:40.511 Latency(us) 00:05:40.511 [2024-11-04T15:17:07.335Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:40.511 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:40.511 Job: Nvme0n1 ended in about 0.40 seconds with error 00:05:40.511 Verification LBA range: start 0x0 length 0x400 00:05:40.511 Nvme0n1 : 0.40 1905.09 119.07 158.76 0.00 30190.00 2902.31 27213.04 00:05:40.511 [2024-11-04T15:17:07.335Z] =================================================================================================================== 00:05:40.511 [2024-11-04T15:17:07.335Z] Total : 1905.09 119.07 158.76 0.00 30190.00 2902.31 27213.04 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.512 [2024-11-04 16:17:07.092057] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.512 [2024-11-04 16:17:07.092077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1205500 (9): Bad file descriptor 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.512 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:05:40.512 [2024-11-04 16:17:07.100819] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2656602 00:05:41.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2656602) - No such process 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:41.449 { 00:05:41.449 "params": { 00:05:41.449 "name": "Nvme$subsystem", 00:05:41.449 "trtype": "$TEST_TRANSPORT", 00:05:41.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:41.449 "adrfam": "ipv4", 00:05:41.449 "trsvcid": "$NVMF_PORT", 00:05:41.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:41.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:41.449 "hdgst": ${hdgst:-false}, 00:05:41.449 "ddgst": ${ddgst:-false} 00:05:41.449 }, 00:05:41.449 "method": "bdev_nvme_attach_controller" 00:05:41.449 } 00:05:41.449 EOF 00:05:41.449 )") 00:05:41.449 
16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:41.449 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:41.449 "params": { 00:05:41.449 "name": "Nvme0", 00:05:41.449 "trtype": "tcp", 00:05:41.449 "traddr": "10.0.0.2", 00:05:41.449 "adrfam": "ipv4", 00:05:41.449 "trsvcid": "4420", 00:05:41.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:41.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:41.449 "hdgst": false, 00:05:41.449 "ddgst": false 00:05:41.449 }, 00:05:41.449 "method": "bdev_nvme_attach_controller" 00:05:41.449 }' 00:05:41.449 [2024-11-04 16:17:08.156025] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:05:41.449 [2024-11-04 16:17:08.156072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2656918 ] 00:05:41.449 [2024-11-04 16:17:08.221198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.449 [2024-11-04 16:17:08.260456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.018 Running I/O for 1 seconds... 
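The `gen_nvmf_target_json` trace above expands a heredoc template once per subsystem, collects the entries in an array, and comma-joins them with `IFS=,` before the `jq` pass. A minimal standalone sketch of that pattern (hypothetical simplified `gen_config` helper, not SPDK's `nvmf/common.sh`, and without the `jq` normalization step) is:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced above: expand a heredoc
# template once per subsystem index, then comma-join the JSON entries.
# Hypothetical simplified helper, not SPDK's nvmf/common.sh (no jq pass).
gen_config() {
    local config=()
    local subsystem
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"  # comma-join, as IFS=, does in the trace
}

gen_config 0
```

With multiple arguments (`gen_config 0 1`) the helper emits one comma-separated entry per subsystem, which is what lets the real test feed a multi-controller config to bdevperf through `/dev/fd/62`.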
00:05:42.958 1984.00 IOPS, 124.00 MiB/s 00:05:42.958 Latency(us) 00:05:42.958 [2024-11-04T15:17:09.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:42.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:42.958 Verification LBA range: start 0x0 length 0x400 00:05:42.958 Nvme0n1 : 1.02 2014.37 125.90 0.00 0.00 31285.28 4930.80 27088.21 00:05:42.958 [2024-11-04T15:17:09.782Z] =================================================================================================================== 00:05:42.958 [2024-11-04T15:17:09.782Z] Total : 2014.37 125.90 0.00 0.00 31285.28 4930.80 27088.21 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:05:42.958 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:42.958 16:17:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:42.958 rmmod nvme_tcp 00:05:43.222 rmmod nvme_fabrics 00:05:43.222 rmmod nvme_keyring 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2656400 ']' 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2656400 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2656400 ']' 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2656400 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656400 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656400' 00:05:43.222 killing process with pid 2656400 00:05:43.222 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2656400 00:05:43.222 16:17:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2656400 00:05:43.481 [2024-11-04 16:17:10.057328] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.481 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:05:45.387 00:05:45.387 real 0m12.098s 00:05:45.387 user 0m20.288s 
00:05:45.387 sys 0m5.236s 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:45.387 ************************************ 00:05:45.387 END TEST nvmf_host_management 00:05:45.387 ************************************ 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.387 16:17:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.647 ************************************ 00:05:45.647 START TEST nvmf_lvol 00:05:45.647 ************************************ 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:05:45.647 * Looking for test storage... 
00:05:45.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.647 16:17:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.647 --rc genhtml_branch_coverage=1 00:05:45.647 --rc genhtml_function_coverage=1 00:05:45.647 --rc genhtml_legend=1 00:05:45.647 --rc geninfo_all_blocks=1 00:05:45.647 --rc geninfo_unexecuted_blocks=1 
00:05:45.647 00:05:45.647 ' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.647 --rc genhtml_branch_coverage=1 00:05:45.647 --rc genhtml_function_coverage=1 00:05:45.647 --rc genhtml_legend=1 00:05:45.647 --rc geninfo_all_blocks=1 00:05:45.647 --rc geninfo_unexecuted_blocks=1 00:05:45.647 00:05:45.647 ' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.647 --rc genhtml_branch_coverage=1 00:05:45.647 --rc genhtml_function_coverage=1 00:05:45.647 --rc genhtml_legend=1 00:05:45.647 --rc geninfo_all_blocks=1 00:05:45.647 --rc geninfo_unexecuted_blocks=1 00:05:45.647 00:05:45.647 ' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.647 --rc genhtml_branch_coverage=1 00:05:45.647 --rc genhtml_function_coverage=1 00:05:45.647 --rc genhtml_legend=1 00:05:45.647 --rc geninfo_all_blocks=1 00:05:45.647 --rc geninfo_unexecuted_blocks=1 00:05:45.647 00:05:45.647 ' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.647 16:17:12 
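The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above splits each dotted version string into components and compares them numerically, position by position, padding the shorter version with zeros. A minimal standalone sketch of the same idea (hypothetical `ver_lt` helper, not SPDK's `scripts/common.sh`) is:

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff dotted version A is strictly less than B.
# Hypothetical re-implementation of the idea behind the cmp_versions trace:
# split on '.', compare numerically component by component, pad with zeros.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Numeric comparison per component is what makes `1.15 < 2` hold even though the string `"1.15"` sorts after `"2"` would suggest otherwise with a naive lexical compare.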
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.647 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.648 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.218 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:52.219 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:52.219 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.219 
16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:52.219 Found net devices under 0000:86:00.0: cvl_0_0 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.219 16:17:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:52.219 Found net devices under 0000:86:00.1: cvl_0_1 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.219 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:52.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms
00:05:52.219
00:05:52.219 --- 10.0.0.2 ping statistics ---
00:05:52.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:52.219 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:52.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:52.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:05:52.219
00:05:52.219 --- 10.0.0.1 ping statistics ---
00:05:52.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:52.219 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2660689 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2660689 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2660689 ']' 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.219 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.219 [2024-11-04 16:17:18.289878] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
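For readers reproducing this topology by hand, the nvmf_tcp_init sequence the harness ran above (flush both e810 ports, move the target port into a namespace, address both sides, bring links up, open TCP port 4420, ping both ways) can be sketched as below. NS, TGT_IF and INI_IF default to the values from this log; the DRY_RUN toggle and run helper are illustrative additions, not part of nvmf/common.sh, and the iptables comment is shortened to a plain SPDK_NVMF tag:

```shell
#!/usr/bin/env bash
# Sketch (not the harness code) of the nvmf_tcp_init steps logged above.
NS=${NS:-cvl_0_0_ns_spdk}       # target namespace name from the log
TGT_IF=${TGT_IF:-cvl_0_0}       # port that hosts the target (10.0.0.2)
INI_IF=${INI_IF:-cvl_0_1}       # port the initiator keeps (10.0.0.1)

# With DRY_RUN=1, print each command instead of executing it (a live
# run needs root and the real NIC ports).
run() { [ "${DRY_RUN:-0}" = 1 ] && { echo "+ $*"; return; }; "$@"; }

nvmf_tcp_init_sketch() {
    run ip -4 addr flush "$TGT_IF"
    run ip -4 addr flush "$INI_IF"
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"          # target side enters the namespace
    run ip addr add 10.0.0.1/24 dev "$INI_IF"      # initiator IP stays in the root namespace
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    run ip link set "$INI_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    run ip netns exec "$NS" ip link set lo up
    # Tagged ACCEPT rule so teardown can strip it again by filtering
    # iptables-save output for SPDK_NVMF.
    run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    run ping -c 1 10.0.0.2                         # cross-namespace reachability check
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}
```

Running `DRY_RUN=1 nvmf_tcp_init_sketch` prints the command list for review before executing it for real.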
00:05:52.219 [2024-11-04 16:17:18.289930] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.219 [2024-11-04 16:17:18.360659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.219 [2024-11-04 16:17:18.405501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.219 [2024-11-04 16:17:18.405536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.219 [2024-11-04 16:17:18.405544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.219 [2024-11-04 16:17:18.405551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.219 [2024-11-04 16:17:18.405557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
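The nvmfappstart step above boils down to: launch nvmf_tgt inside the namespace, record its pid, and poll until the process is alive and its RPC socket exists. A simplified sketch follows; NVMF_TGT_BIN is a placeholder path, the flags are copied from the log, and the polling loop is far simpler than the real waitforlisten in autotest_common.sh:

```shell
#!/usr/bin/env bash
# Simplified nvmfappstart sketch: run nvmf_tgt in the target namespace,
# then wait for its UNIX-domain RPC socket to appear.
NS=${NS:-cvl_0_0_ns_spdk}
NVMF_TGT_BIN=${NVMF_TGT_BIN:-build/bin/nvmf_tgt}   # placeholder path
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

waitforlisten_sketch() {
    # Succeed once the socket exists; fail if the app dies or we time out.
    local pid=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [ -S "$RPC_SOCK" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1
}

nvmfappstart_sketch() {
    ip netns exec "$NS" "$NVMF_TGT_BIN" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    waitforlisten_sketch "$nvmfpid"
}
```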
00:05:52.219 [2024-11-04 16:17:18.406994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.219 [2024-11-04 16:17:18.407093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.220 [2024-11-04 16:17:18.407095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:52.220 [2024-11-04 16:17:18.707805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:05:52.220 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:52.479 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:05:52.479 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:05:52.738 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:05:52.738 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5689d59f-0823-41fb-a767-383268e8edb4 00:05:52.738 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5689d59f-0823-41fb-a767-383268e8edb4 lvol 20 00:05:52.997 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d5456ebb-c6e7-4114-ab53-e9fe58c7ff30 00:05:52.997 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:53.258 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5456ebb-c6e7-4114-ab53-e9fe58c7ff30 00:05:53.516 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:53.516 [2024-11-04 16:17:20.296026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.516 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.775 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2661182 00:05:53.775 16:17:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:05:53.775 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:05:54.711 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d5456ebb-c6e7-4114-ab53-e9fe58c7ff30 MY_SNAPSHOT
00:05:54.970 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cb40349a-facf-4985-8c5c-6879beddb56f
00:05:54.970 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d5456ebb-c6e7-4114-ab53-e9fe58c7ff30 30
00:05:55.229 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cb40349a-facf-4985-8c5c-6879beddb56f MY_CLONE
00:05:55.488 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=64aa4c85-6faf-4527-b799-5c12ad5a105f
00:05:55.488 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 64aa4c85-6faf-4527-b799-5c12ad5a105f
00:05:56.055 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2661182
00:06:04.174 Initializing NVMe Controllers
00:06:04.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:04.174 Controller IO queue size 128, less than required.
00:06:04.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:04.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:04.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:04.174 Initialization complete. Launching workers.
00:06:04.174 ========================================================
00:06:04.174                                                                                                               Latency(us)
00:06:04.174 Device Information                                                                       :       IOPS      MiB/s    Average        min        max
00:06:04.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   12158.77      47.50   10535.14    1585.81   99798.92
00:06:04.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   12055.88      47.09   10622.68    2578.57   41885.09
00:06:04.174 ========================================================
00:06:04.174 Total                                                                                    :   24214.65      94.59   10578.72    1585.81   99798.92
00:06:04.174
00:06:04.174 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:04.433 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5456ebb-c6e7-4114-ab53-e9fe58c7ff30
00:06:04.692 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5689d59f-0823-41fb-a767-383268e8edb4
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.951 rmmod nvme_tcp 00:06:04.951 rmmod nvme_fabrics 00:06:04.951 rmmod nvme_keyring 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2660689 ']' 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2660689 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2660689 ']' 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2660689 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660689 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660689' 00:06:04.951 killing process with pid 2660689 00:06:04.951 16:17:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2660689 00:06:04.951 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2660689 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.211 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.747 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:07.747 00:06:07.747 real 0m21.737s 00:06:07.747 user 1m3.090s 00:06:07.747 sys 0m7.315s 00:06:07.747 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.747 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:07.747 ************************************ 00:06:07.747 END TEST 
nvmf_lvol 00:06:07.747 ************************************ 00:06:07.747 16:17:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:07.747 16:17:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.747 ************************************ 00:06:07.747 START TEST nvmf_lvs_grow 00:06:07.747 ************************************ 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:07.747 * Looking for test storage... 00:06:07.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.747 16:17:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.747 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.748 --rc genhtml_branch_coverage=1 00:06:07.748 --rc genhtml_function_coverage=1 00:06:07.748 --rc genhtml_legend=1 00:06:07.748 --rc geninfo_all_blocks=1 00:06:07.748 --rc geninfo_unexecuted_blocks=1 00:06:07.748 00:06:07.748 ' 
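The scripts/common.sh walk above (lt 1.15 2 via cmp_versions, which splits each version on .-: with read -ra and compares field by field before choosing the lcov coverage flags) can be approximated in one helper using GNU sort -V. A rough stand-in for illustration, not the harness implementation:

```shell
# Approximation of scripts/common.sh's lt/cmp_versions using GNU
# sort -V (natural version ordering) instead of the per-field loop.
version_lt() {
    # True when $1 sorts strictly before $2 in version order.
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

For example, `version_lt 1.15 2` succeeds, mirroring the lt 1.15 2 decision in the log.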
00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.748 --rc genhtml_branch_coverage=1 00:06:07.748 --rc genhtml_function_coverage=1 00:06:07.748 --rc genhtml_legend=1 00:06:07.748 --rc geninfo_all_blocks=1 00:06:07.748 --rc geninfo_unexecuted_blocks=1 00:06:07.748 00:06:07.748 ' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.748 --rc genhtml_branch_coverage=1 00:06:07.748 --rc genhtml_function_coverage=1 00:06:07.748 --rc genhtml_legend=1 00:06:07.748 --rc geninfo_all_blocks=1 00:06:07.748 --rc geninfo_unexecuted_blocks=1 00:06:07.748 00:06:07.748 ' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.748 --rc genhtml_branch_coverage=1 00:06:07.748 --rc genhtml_function_coverage=1 00:06:07.748 --rc genhtml_legend=1 00:06:07.748 --rc geninfo_all_blocks=1 00:06:07.748 --rc geninfo_unexecuted_blocks=1 00:06:07.748 00:06:07.748 ' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.748 16:17:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.748 
16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.748 16:17:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.748 
16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.748 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:13.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.022 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:13.282 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.282 
16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:13.282 Found net devices under 0000:86:00.0: cvl_0_0 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:13.282 Found net devices under 0000:86:00.1: cvl_0_1 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.282 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.283 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.283 16:17:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:06:13.283 00:06:13.283 --- 10.0.0.2 ping statistics --- 00:06:13.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.283 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:06:13.283 00:06:13.283 --- 10.0.0.1 ping statistics --- 00:06:13.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.283 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.283 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2666565 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2666565 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2666565 ']' 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.542 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.542 [2024-11-04 16:17:40.198999] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
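The namespace plumbing recorded a few entries earlier (the `nvmf_tcp_init` block, nvmf/common.sh@265-291) boils down to the sequence below. This is a dry-run sketch that only echoes the commands; the interface names (cvl_0_0/cvl_0_1), addresses, and namespace name are the ones this particular run happened to pick, so treat them as illustrative rather than fixed.

```shell
# Dry-run sketch of the target-namespace setup this log records.
# Names and addresses are copied from this run; they are host-specific.
NETNS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace, carries the target IP
INI_IF=cvl_0_1   # stays on the host, carries the initiator IP

net_setup_cmds() {
  echo "ip netns add $NETNS"
  echo "ip link set $TGT_IF netns $NETNS"
  echo "ip addr add 10.0.0.1/24 dev $INI_IF"
  echo "ip netns exec $NETNS ip addr add 10.0.0.2/24 dev $TGT_IF"
  echo "ip link set $INI_IF up"
  echo "ip netns exec $NETNS ip link set $TGT_IF up"
  echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 10.0.0.2"   # initiator -> target sanity check, as in the log
}
net_setup_cmds
```

Dropping the `echo` prefixes (and running as root) reproduces the sequence; the log's `ipts` wrapper additionally tags the iptables rule with an `SPDK_NVMF` comment so the harness can clean it up afterwards.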
00:06:13.542 [2024-11-04 16:17:40.199042] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.542 [2024-11-04 16:17:40.267182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.542 [2024-11-04 16:17:40.307463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.543 [2024-11-04 16:17:40.307508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.543 [2024-11-04 16:17:40.307516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.543 [2024-11-04 16:17:40.307522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.543 [2024-11-04 16:17:40.307527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:13.543 [2024-11-04 16:17:40.308101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.801 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:13.801 [2024-11-04 16:17:40.616565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:14.060 ************************************ 00:06:14.060 START TEST lvs_grow_clean 00:06:14.060 ************************************ 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:14.060 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:14.318 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:14.318 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:14.318 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:14.318 16:17:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:14.318 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:14.576 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:14.576 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:14.576 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 lvol 150 00:06:14.834 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3bb15ead-e68a-4d5f-a29a-e10bfd300886 00:06:14.834 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:14.834 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:14.834 [2024-11-04 16:17:41.604335] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:14.834 [2024-11-04 16:17:41.604394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:14.834 true 00:06:14.834 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:14.834 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:15.092 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:15.092 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.350 16:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3bb15ead-e68a-4d5f-a29a-e10bfd300886 00:06:15.608 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.608 [2024-11-04 16:17:42.342578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.608 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2667064 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
2667064 /var/tmp/bdevperf.sock 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2667064 ']' 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.866 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:15.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:15.867 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.867 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:15.867 [2024-11-04 16:17:42.567432] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
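The bdevperf side recorded around here follows a fixed pattern: start bdevperf with `-z` (idle until RPCs arrive), attach the target's subsystem as `Nvme0` over TCP, then drive the workload via `perform_tests`. A dry-run sketch of that sequence, with paths mirroring this run's workspace layout:

```shell
# Dry-run sketch of the bdevperf flow this log records (echoed, not executed).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

bdevperf_cmds() {
  # -z: start idle and wait on the RPC socket instead of running immediately
  echo "$SPDK/build/examples/bdevperf -r $SOCK -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &"
  # attach the namespace exported by nqn.2016-06.io.spdk:cnode0 as bdev Nvme0
  echo "$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0"
  # kick off the configured randwrite workload and stream per-second stats
  echo "$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests"
}
bdevperf_cmds
```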
00:06:15.867 [2024-11-04 16:17:42.567480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667064 ] 00:06:15.867 [2024-11-04 16:17:42.629957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.867 [2024-11-04 16:17:42.670035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.125 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.125 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:16.125 16:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:16.382 Nvme0n1 00:06:16.382 16:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:16.382 [ 00:06:16.382 { 00:06:16.382 "name": "Nvme0n1", 00:06:16.382 "aliases": [ 00:06:16.382 "3bb15ead-e68a-4d5f-a29a-e10bfd300886" 00:06:16.382 ], 00:06:16.382 "product_name": "NVMe disk", 00:06:16.382 "block_size": 4096, 00:06:16.382 "num_blocks": 38912, 00:06:16.382 "uuid": "3bb15ead-e68a-4d5f-a29a-e10bfd300886", 00:06:16.382 "numa_id": 1, 00:06:16.382 "assigned_rate_limits": { 00:06:16.382 "rw_ios_per_sec": 0, 00:06:16.382 "rw_mbytes_per_sec": 0, 00:06:16.382 "r_mbytes_per_sec": 0, 00:06:16.382 "w_mbytes_per_sec": 0 00:06:16.382 }, 00:06:16.382 "claimed": false, 00:06:16.382 "zoned": false, 00:06:16.382 "supported_io_types": { 00:06:16.382 "read": true, 
00:06:16.382 "write": true, 00:06:16.382 "unmap": true, 00:06:16.382 "flush": true, 00:06:16.383 "reset": true, 00:06:16.383 "nvme_admin": true, 00:06:16.383 "nvme_io": true, 00:06:16.383 "nvme_io_md": false, 00:06:16.383 "write_zeroes": true, 00:06:16.383 "zcopy": false, 00:06:16.383 "get_zone_info": false, 00:06:16.383 "zone_management": false, 00:06:16.383 "zone_append": false, 00:06:16.383 "compare": true, 00:06:16.383 "compare_and_write": true, 00:06:16.383 "abort": true, 00:06:16.383 "seek_hole": false, 00:06:16.383 "seek_data": false, 00:06:16.383 "copy": true, 00:06:16.383 "nvme_iov_md": false 00:06:16.383 }, 00:06:16.383 "memory_domains": [ 00:06:16.383 { 00:06:16.383 "dma_device_id": "system", 00:06:16.383 "dma_device_type": 1 00:06:16.383 } 00:06:16.383 ], 00:06:16.383 "driver_specific": { 00:06:16.383 "nvme": [ 00:06:16.383 { 00:06:16.383 "trid": { 00:06:16.383 "trtype": "TCP", 00:06:16.383 "adrfam": "IPv4", 00:06:16.383 "traddr": "10.0.0.2", 00:06:16.383 "trsvcid": "4420", 00:06:16.383 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:16.383 }, 00:06:16.383 "ctrlr_data": { 00:06:16.383 "cntlid": 1, 00:06:16.383 "vendor_id": "0x8086", 00:06:16.383 "model_number": "SPDK bdev Controller", 00:06:16.383 "serial_number": "SPDK0", 00:06:16.383 "firmware_revision": "25.01", 00:06:16.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:16.383 "oacs": { 00:06:16.383 "security": 0, 00:06:16.383 "format": 0, 00:06:16.383 "firmware": 0, 00:06:16.383 "ns_manage": 0 00:06:16.383 }, 00:06:16.383 "multi_ctrlr": true, 00:06:16.383 "ana_reporting": false 00:06:16.383 }, 00:06:16.383 "vs": { 00:06:16.383 "nvme_version": "1.3" 00:06:16.383 }, 00:06:16.383 "ns_data": { 00:06:16.383 "id": 1, 00:06:16.383 "can_share": true 00:06:16.383 } 00:06:16.383 } 00:06:16.383 ], 00:06:16.383 "mp_policy": "active_passive" 00:06:16.383 } 00:06:16.383 } 00:06:16.383 ] 00:06:16.383 16:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2667098 00:06:16.383 16:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:16.383 16:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:16.641 Running I/O for 10 seconds... 00:06:17.575 Latency(us) 00:06:17.575 [2024-11-04T15:17:44.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:17.575 Nvme0n1 : 1.00 23531.00 91.92 0.00 0.00 0.00 0.00 0.00 00:06:17.575 [2024-11-04T15:17:44.399Z] =================================================================================================================== 00:06:17.575 [2024-11-04T15:17:44.399Z] Total : 23531.00 91.92 0.00 0.00 0.00 0.00 0.00 00:06:17.575 00:06:18.509 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:18.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:18.509 Nvme0n1 : 2.00 23642.50 92.35 0.00 0.00 0.00 0.00 0.00 00:06:18.509 [2024-11-04T15:17:45.333Z] =================================================================================================================== 00:06:18.509 [2024-11-04T15:17:45.333Z] Total : 23642.50 92.35 0.00 0.00 0.00 0.00 0.00 00:06:18.509 00:06:18.767 true 00:06:18.767 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:18.767 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:19.025 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:19.025 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:19.025 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2667098 00:06:19.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:19.591 Nvme0n1 : 3.00 23700.33 92.58 0.00 0.00 0.00 0.00 0.00 00:06:19.591 [2024-11-04T15:17:46.415Z] =================================================================================================================== 00:06:19.591 [2024-11-04T15:17:46.415Z] Total : 23700.33 92.58 0.00 0.00 0.00 0.00 0.00 00:06:19.591 00:06:20.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:20.525 Nvme0n1 : 4.00 23764.50 92.83 0.00 0.00 0.00 0.00 0.00 00:06:20.525 [2024-11-04T15:17:47.349Z] =================================================================================================================== 00:06:20.525 [2024-11-04T15:17:47.349Z] Total : 23764.50 92.83 0.00 0.00 0.00 0.00 0.00 00:06:20.525 00:06:21.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:21.900 Nvme0n1 : 5.00 23812.40 93.02 0.00 0.00 0.00 0.00 0.00 00:06:21.900 [2024-11-04T15:17:48.724Z] =================================================================================================================== 00:06:21.900 [2024-11-04T15:17:48.724Z] Total : 23812.40 93.02 0.00 0.00 0.00 0.00 0.00 00:06:21.900 00:06:22.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:22.834 Nvme0n1 : 6.00 23844.67 93.14 0.00 0.00 0.00 0.00 0.00 00:06:22.834 [2024-11-04T15:17:49.658Z] =================================================================================================================== 00:06:22.834 
[2024-11-04T15:17:49.658Z] Total : 23844.67 93.14 0.00 0.00 0.00 0.00 0.00 00:06:22.834 00:06:23.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:23.766 Nvme0n1 : 7.00 23831.29 93.09 0.00 0.00 0.00 0.00 0.00 00:06:23.766 [2024-11-04T15:17:50.590Z] =================================================================================================================== 00:06:23.766 [2024-11-04T15:17:50.590Z] Total : 23831.29 93.09 0.00 0.00 0.00 0.00 0.00 00:06:23.766 00:06:24.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:24.846 Nvme0n1 : 8.00 23837.38 93.11 0.00 0.00 0.00 0.00 0.00 00:06:24.846 [2024-11-04T15:17:51.670Z] =================================================================================================================== 00:06:24.846 [2024-11-04T15:17:51.670Z] Total : 23837.38 93.11 0.00 0.00 0.00 0.00 0.00 00:06:24.846 00:06:25.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:25.796 Nvme0n1 : 9.00 23856.44 93.19 0.00 0.00 0.00 0.00 0.00 00:06:25.796 [2024-11-04T15:17:52.620Z] =================================================================================================================== 00:06:25.796 [2024-11-04T15:17:52.620Z] Total : 23856.44 93.19 0.00 0.00 0.00 0.00 0.00 00:06:25.796 00:06:26.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:26.732 Nvme0n1 : 10.00 23865.20 93.22 0.00 0.00 0.00 0.00 0.00 00:06:26.732 [2024-11-04T15:17:53.556Z] =================================================================================================================== 00:06:26.732 [2024-11-04T15:17:53.556Z] Total : 23865.20 93.22 0.00 0.00 0.00 0.00 0.00 00:06:26.732 00:06:26.732 00:06:26.732 Latency(us) 00:06:26.732 [2024-11-04T15:17:53.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:26.732 Nvme0n1 : 10.00 23867.81 93.23 0.00 0.00 5359.91 3120.76 11546.82 00:06:26.732 [2024-11-04T15:17:53.556Z] =================================================================================================================== 00:06:26.732 [2024-11-04T15:17:53.556Z] Total : 23867.81 93.23 0.00 0.00 5359.91 3120.76 11546.82 00:06:26.732 { 00:06:26.732 "results": [ 00:06:26.732 { 00:06:26.732 "job": "Nvme0n1", 00:06:26.732 "core_mask": "0x2", 00:06:26.732 "workload": "randwrite", 00:06:26.732 "status": "finished", 00:06:26.732 "queue_depth": 128, 00:06:26.732 "io_size": 4096, 00:06:26.732 "runtime": 10.004269, 00:06:26.732 "iops": 23867.81083155601, 00:06:26.732 "mibps": 93.23363606076566, 00:06:26.732 "io_failed": 0, 00:06:26.732 "io_timeout": 0, 00:06:26.732 "avg_latency_us": 5359.910335826165, 00:06:26.732 "min_latency_us": 3120.7619047619046, 00:06:26.732 "max_latency_us": 11546.819047619048 00:06:26.732 } 00:06:26.732 ], 00:06:26.732 "core_count": 1 00:06:26.732 } 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2667064 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2667064 ']' 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2667064 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667064 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:26.732 16:17:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667064' 00:06:26.732 killing process with pid 2667064 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2667064 00:06:26.732 Received shutdown signal, test time was about 10.000000 seconds 00:06:26.732 00:06:26.732 Latency(us) 00:06:26.732 [2024-11-04T15:17:53.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.732 [2024-11-04T15:17:53.556Z] =================================================================================================================== 00:06:26.732 [2024-11-04T15:17:53.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2667064 00:06:26.732 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.991 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:27.250 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:27.250 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:27.509 [2024-11-04 16:17:54.275170] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.509 
16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:27.509 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:27.768 request: 00:06:27.768 { 00:06:27.768 "uuid": "ebd8466b-72f5-46db-87c6-cb2bad1f6294", 00:06:27.768 "method": "bdev_lvol_get_lvstores", 00:06:27.768 "req_id": 1 00:06:27.768 } 00:06:27.768 Got JSON-RPC error response 00:06:27.768 response: 00:06:27.768 { 00:06:27.768 "code": -19, 00:06:27.768 "message": "No such device" 00:06:27.768 } 00:06:27.768 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:27.768 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.768 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.768 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.768 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:28.027 aio_bdev 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3bb15ead-e68a-4d5f-a29a-e10bfd300886 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3bb15ead-e68a-4d5f-a29a-e10bfd300886 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:28.027 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3bb15ead-e68a-4d5f-a29a-e10bfd300886 -t 2000 00:06:28.286 [ 00:06:28.286 { 00:06:28.286 "name": "3bb15ead-e68a-4d5f-a29a-e10bfd300886", 00:06:28.286 "aliases": [ 00:06:28.286 "lvs/lvol" 00:06:28.286 ], 00:06:28.286 "product_name": "Logical Volume", 00:06:28.286 "block_size": 4096, 00:06:28.286 "num_blocks": 38912, 00:06:28.286 "uuid": "3bb15ead-e68a-4d5f-a29a-e10bfd300886", 00:06:28.286 "assigned_rate_limits": { 00:06:28.286 "rw_ios_per_sec": 0, 00:06:28.286 "rw_mbytes_per_sec": 0, 00:06:28.286 "r_mbytes_per_sec": 0, 00:06:28.286 "w_mbytes_per_sec": 0 00:06:28.286 }, 00:06:28.286 "claimed": false, 00:06:28.286 "zoned": false, 00:06:28.286 "supported_io_types": { 00:06:28.286 "read": true, 00:06:28.286 "write": true, 00:06:28.286 "unmap": true, 00:06:28.286 "flush": false, 00:06:28.286 "reset": true, 00:06:28.286 
"nvme_admin": false, 00:06:28.286 "nvme_io": false, 00:06:28.286 "nvme_io_md": false, 00:06:28.286 "write_zeroes": true, 00:06:28.286 "zcopy": false, 00:06:28.286 "get_zone_info": false, 00:06:28.286 "zone_management": false, 00:06:28.286 "zone_append": false, 00:06:28.286 "compare": false, 00:06:28.286 "compare_and_write": false, 00:06:28.286 "abort": false, 00:06:28.286 "seek_hole": true, 00:06:28.286 "seek_data": true, 00:06:28.286 "copy": false, 00:06:28.286 "nvme_iov_md": false 00:06:28.286 }, 00:06:28.286 "driver_specific": { 00:06:28.286 "lvol": { 00:06:28.286 "lvol_store_uuid": "ebd8466b-72f5-46db-87c6-cb2bad1f6294", 00:06:28.286 "base_bdev": "aio_bdev", 00:06:28.286 "thin_provision": false, 00:06:28.286 "num_allocated_clusters": 38, 00:06:28.286 "snapshot": false, 00:06:28.286 "clone": false, 00:06:28.286 "esnap_clone": false 00:06:28.286 } 00:06:28.286 } 00:06:28.286 } 00:06:28.286 ] 00:06:28.286 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:28.286 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:28.286 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:28.545 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:28.545 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:28.545 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:28.804 16:17:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:28.804 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3bb15ead-e68a-4d5f-a29a-e10bfd300886 00:06:28.804 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ebd8466b-72f5-46db-87c6-cb2bad1f6294 00:06:29.063 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:29.322 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:29.322 00:06:29.322 real 0m15.346s 00:06:29.322 user 0m14.910s 00:06:29.322 sys 0m1.417s 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:29.322 ************************************ 00:06:29.322 END TEST lvs_grow_clean 00:06:29.322 ************************************ 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.322 ************************************ 
00:06:29.322 START TEST lvs_grow_dirty 00:06:29.322 ************************************ 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:29.322 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:29.579 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:29.579 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:29.837 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:29.837 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:29.837 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 lvol 150 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8963881d-8465-4235-b8ab-df9d1907987f 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.096 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:30.355 [2024-11-04 16:17:57.037386] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:06:30.355 [2024-11-04 16:17:57.037436] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:30.355 true 00:06:30.355 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:30.355 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:30.612 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:30.613 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.613 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8963881d-8465-4235-b8ab-df9d1907987f 00:06:30.871 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.129 [2024-11-04 16:17:57.759537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.129 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.388 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2669672 00:06:31.388 16:17:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:31.388 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2669672 /var/tmp/bdevperf.sock 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2669672 ']' 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.389 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:31.389 [2024-11-04 16:17:57.997087] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:06:31.389 [2024-11-04 16:17:57.997134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669672 ] 00:06:31.389 [2024-11-04 16:17:58.059031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.389 [2024-11-04 16:17:58.098956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.389 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.389 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:31.389 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:31.954 Nvme0n1 00:06:31.954 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:31.954 [ 00:06:31.954 { 00:06:31.954 "name": "Nvme0n1", 00:06:31.954 "aliases": [ 00:06:31.954 "8963881d-8465-4235-b8ab-df9d1907987f" 00:06:31.954 ], 00:06:31.954 "product_name": "NVMe disk", 00:06:31.954 "block_size": 4096, 00:06:31.954 "num_blocks": 38912, 00:06:31.954 "uuid": "8963881d-8465-4235-b8ab-df9d1907987f", 00:06:31.954 "numa_id": 1, 00:06:31.954 "assigned_rate_limits": { 00:06:31.954 "rw_ios_per_sec": 0, 00:06:31.954 "rw_mbytes_per_sec": 0, 00:06:31.954 "r_mbytes_per_sec": 0, 00:06:31.954 "w_mbytes_per_sec": 0 00:06:31.954 }, 00:06:31.954 "claimed": false, 00:06:31.954 "zoned": false, 00:06:31.954 "supported_io_types": { 00:06:31.954 "read": true, 
00:06:31.954 "write": true, 00:06:31.954 "unmap": true, 00:06:31.954 "flush": true, 00:06:31.954 "reset": true, 00:06:31.954 "nvme_admin": true, 00:06:31.954 "nvme_io": true, 00:06:31.954 "nvme_io_md": false, 00:06:31.954 "write_zeroes": true, 00:06:31.954 "zcopy": false, 00:06:31.954 "get_zone_info": false, 00:06:31.954 "zone_management": false, 00:06:31.954 "zone_append": false, 00:06:31.954 "compare": true, 00:06:31.954 "compare_and_write": true, 00:06:31.954 "abort": true, 00:06:31.954 "seek_hole": false, 00:06:31.954 "seek_data": false, 00:06:31.954 "copy": true, 00:06:31.954 "nvme_iov_md": false 00:06:31.954 }, 00:06:31.954 "memory_domains": [ 00:06:31.954 { 00:06:31.954 "dma_device_id": "system", 00:06:31.954 "dma_device_type": 1 00:06:31.954 } 00:06:31.954 ], 00:06:31.954 "driver_specific": { 00:06:31.954 "nvme": [ 00:06:31.954 { 00:06:31.954 "trid": { 00:06:31.954 "trtype": "TCP", 00:06:31.954 "adrfam": "IPv4", 00:06:31.954 "traddr": "10.0.0.2", 00:06:31.954 "trsvcid": "4420", 00:06:31.954 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:31.954 }, 00:06:31.954 "ctrlr_data": { 00:06:31.954 "cntlid": 1, 00:06:31.954 "vendor_id": "0x8086", 00:06:31.954 "model_number": "SPDK bdev Controller", 00:06:31.954 "serial_number": "SPDK0", 00:06:31.954 "firmware_revision": "25.01", 00:06:31.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:31.954 "oacs": { 00:06:31.954 "security": 0, 00:06:31.954 "format": 0, 00:06:31.954 "firmware": 0, 00:06:31.954 "ns_manage": 0 00:06:31.954 }, 00:06:31.954 "multi_ctrlr": true, 00:06:31.954 "ana_reporting": false 00:06:31.954 }, 00:06:31.954 "vs": { 00:06:31.954 "nvme_version": "1.3" 00:06:31.954 }, 00:06:31.954 "ns_data": { 00:06:31.954 "id": 1, 00:06:31.954 "can_share": true 00:06:31.954 } 00:06:31.954 } 00:06:31.954 ], 00:06:31.954 "mp_policy": "active_passive" 00:06:31.954 } 00:06:31.954 } 00:06:31.954 ] 00:06:31.954 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2669904 00:06:31.954 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:31.954 16:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:32.212 Running I/O for 10 seconds... 00:06:33.147 Latency(us) 00:06:33.147 [2024-11-04T15:17:59.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:33.147 Nvme0n1 : 1.00 22501.00 87.89 0.00 0.00 0.00 0.00 0.00 00:06:33.147 [2024-11-04T15:17:59.971Z] =================================================================================================================== 00:06:33.147 [2024-11-04T15:17:59.971Z] Total : 22501.00 87.89 0.00 0.00 0.00 0.00 0.00 00:06:33.147 00:06:34.080 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:34.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:34.080 Nvme0n1 : 2.00 22290.50 87.07 0.00 0.00 0.00 0.00 0.00 00:06:34.080 [2024-11-04T15:18:00.904Z] =================================================================================================================== 00:06:34.080 [2024-11-04T15:18:00.904Z] Total : 22290.50 87.07 0.00 0.00 0.00 0.00 0.00 00:06:34.080 00:06:34.338 true 00:06:34.338 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:34.338 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:34.338 16:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:34.338 16:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:34.338 16:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2669904 00:06:35.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:35.274 Nvme0n1 : 3.00 22356.33 87.33 0.00 0.00 0.00 0.00 0.00 00:06:35.274 [2024-11-04T15:18:02.098Z] =================================================================================================================== 00:06:35.274 [2024-11-04T15:18:02.098Z] Total : 22356.33 87.33 0.00 0.00 0.00 0.00 0.00 00:06:35.274 00:06:36.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:36.209 Nvme0n1 : 4.00 22461.25 87.74 0.00 0.00 0.00 0.00 0.00 00:06:36.209 [2024-11-04T15:18:03.033Z] =================================================================================================================== 00:06:36.209 [2024-11-04T15:18:03.033Z] Total : 22461.25 87.74 0.00 0.00 0.00 0.00 0.00 00:06:36.209 00:06:37.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.143 Nvme0n1 : 5.00 22498.60 87.89 0.00 0.00 0.00 0.00 0.00 00:06:37.143 [2024-11-04T15:18:03.967Z] =================================================================================================================== 00:06:37.143 [2024-11-04T15:18:03.967Z] Total : 22498.60 87.89 0.00 0.00 0.00 0.00 0.00 00:06:37.143 00:06:38.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.078 Nvme0n1 : 6.00 22523.50 87.98 0.00 0.00 0.00 0.00 0.00 00:06:38.078 [2024-11-04T15:18:04.902Z] =================================================================================================================== 00:06:38.078 
[2024-11-04T15:18:04.902Z] Total : 22523.50 87.98 0.00 0.00 0.00 0.00 0.00 00:06:38.078 00:06:39.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:39.454 Nvme0n1 : 7.00 22552.71 88.10 0.00 0.00 0.00 0.00 0.00 00:06:39.454 [2024-11-04T15:18:06.278Z] =================================================================================================================== 00:06:39.454 [2024-11-04T15:18:06.278Z] Total : 22552.71 88.10 0.00 0.00 0.00 0.00 0.00 00:06:39.454 00:06:40.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:40.390 Nvme0n1 : 8.00 22573.62 88.18 0.00 0.00 0.00 0.00 0.00 00:06:40.390 [2024-11-04T15:18:07.214Z] =================================================================================================================== 00:06:40.390 [2024-11-04T15:18:07.214Z] Total : 22573.62 88.18 0.00 0.00 0.00 0.00 0.00 00:06:40.390 00:06:41.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:41.326 Nvme0n1 : 9.00 22589.00 88.24 0.00 0.00 0.00 0.00 0.00 00:06:41.326 [2024-11-04T15:18:08.150Z] =================================================================================================================== 00:06:41.326 [2024-11-04T15:18:08.150Z] Total : 22589.00 88.24 0.00 0.00 0.00 0.00 0.00 00:06:41.326 00:06:42.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.262 Nvme0n1 : 10.00 22571.70 88.17 0.00 0.00 0.00 0.00 0.00 00:06:42.262 [2024-11-04T15:18:09.086Z] =================================================================================================================== 00:06:42.262 [2024-11-04T15:18:09.086Z] Total : 22571.70 88.17 0.00 0.00 0.00 0.00 0.00 00:06:42.262 00:06:42.262 00:06:42.262 Latency(us) 00:06:42.262 [2024-11-04T15:18:09.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:42.262 Nvme0n1 : 10.01 22571.87 88.17 0.00 0.00 5666.75 4275.44 10985.08 00:06:42.262 [2024-11-04T15:18:09.086Z] =================================================================================================================== 00:06:42.262 [2024-11-04T15:18:09.086Z] Total : 22571.87 88.17 0.00 0.00 5666.75 4275.44 10985.08 00:06:42.262 { 00:06:42.262 "results": [ 00:06:42.262 { 00:06:42.262 "job": "Nvme0n1", 00:06:42.262 "core_mask": "0x2", 00:06:42.262 "workload": "randwrite", 00:06:42.262 "status": "finished", 00:06:42.262 "queue_depth": 128, 00:06:42.263 "io_size": 4096, 00:06:42.263 "runtime": 10.005241, 00:06:42.263 "iops": 22571.870082889556, 00:06:42.263 "mibps": 88.17136751128733, 00:06:42.263 "io_failed": 0, 00:06:42.263 "io_timeout": 0, 00:06:42.263 "avg_latency_us": 5666.745870795561, 00:06:42.263 "min_latency_us": 4275.443809523809, 00:06:42.263 "max_latency_us": 10985.081904761904 00:06:42.263 } 00:06:42.263 ], 00:06:42.263 "core_count": 1 00:06:42.263 } 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2669672 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2669672 ']' 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2669672 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669672 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:42.263 16:18:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669672' 00:06:42.263 killing process with pid 2669672 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2669672 00:06:42.263 Received shutdown signal, test time was about 10.000000 seconds 00:06:42.263 00:06:42.263 Latency(us) 00:06:42.263 [2024-11-04T15:18:09.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.263 [2024-11-04T15:18:09.087Z] =================================================================================================================== 00:06:42.263 [2024-11-04T15:18:09.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:42.263 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2669672 00:06:42.521 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:42.521 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:42.780 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:42.780 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2666565 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2666565 00:06:43.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2666565 Killed "${NVMF_APP[@]}" "$@" 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2672144 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2672144 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2672144 ']' 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.044 16:18:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.044 16:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:43.044 [2024-11-04 16:18:09.815461] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:06:43.044 [2024-11-04 16:18:09.815514] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.302 [2024-11-04 16:18:09.884030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.302 [2024-11-04 16:18:09.924147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.302 [2024-11-04 16:18:09.924181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.302 [2024-11-04 16:18:09.924188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.302 [2024-11-04 16:18:09.924194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.302 [2024-11-04 16:18:09.924199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.302 [2024-11-04 16:18:09.924801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.302 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:43.561 [2024-11-04 16:18:10.228921] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:06:43.561 [2024-11-04 16:18:10.229018] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:06:43.561 [2024-11-04 16:18:10.229044] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8963881d-8465-4235-b8ab-df9d1907987f 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8963881d-8465-4235-b8ab-df9d1907987f 
00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:43.561 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:43.819 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8963881d-8465-4235-b8ab-df9d1907987f -t 2000 00:06:43.819 [ 00:06:43.819 { 00:06:43.819 "name": "8963881d-8465-4235-b8ab-df9d1907987f", 00:06:43.819 "aliases": [ 00:06:43.819 "lvs/lvol" 00:06:43.819 ], 00:06:43.819 "product_name": "Logical Volume", 00:06:43.819 "block_size": 4096, 00:06:43.819 "num_blocks": 38912, 00:06:43.819 "uuid": "8963881d-8465-4235-b8ab-df9d1907987f", 00:06:43.819 "assigned_rate_limits": { 00:06:43.819 "rw_ios_per_sec": 0, 00:06:43.819 "rw_mbytes_per_sec": 0, 00:06:43.819 "r_mbytes_per_sec": 0, 00:06:43.819 "w_mbytes_per_sec": 0 00:06:43.819 }, 00:06:43.819 "claimed": false, 00:06:43.819 "zoned": false, 00:06:43.819 "supported_io_types": { 00:06:43.819 "read": true, 00:06:43.819 "write": true, 00:06:43.819 "unmap": true, 00:06:43.819 "flush": false, 00:06:43.819 "reset": true, 00:06:43.819 "nvme_admin": false, 00:06:43.819 "nvme_io": false, 00:06:43.819 "nvme_io_md": false, 00:06:43.819 "write_zeroes": true, 00:06:43.819 "zcopy": false, 00:06:43.819 "get_zone_info": false, 00:06:43.819 "zone_management": false, 00:06:43.819 "zone_append": 
false, 00:06:43.819 "compare": false, 00:06:43.819 "compare_and_write": false, 00:06:43.819 "abort": false, 00:06:43.819 "seek_hole": true, 00:06:43.819 "seek_data": true, 00:06:43.819 "copy": false, 00:06:43.819 "nvme_iov_md": false 00:06:43.819 }, 00:06:43.819 "driver_specific": { 00:06:43.819 "lvol": { 00:06:43.819 "lvol_store_uuid": "e8757b3a-b9bb-4b57-b93c-c3b75398f4b2", 00:06:43.819 "base_bdev": "aio_bdev", 00:06:43.819 "thin_provision": false, 00:06:43.819 "num_allocated_clusters": 38, 00:06:43.819 "snapshot": false, 00:06:43.819 "clone": false, 00:06:43.819 "esnap_clone": false 00:06:43.819 } 00:06:43.819 } 00:06:43.819 } 00:06:43.819 ] 00:06:43.819 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:43.819 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:43.819 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:06:44.078 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:06:44.078 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:44.078 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:06:44.336 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:06:44.336 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:06:44.336 [2024-11-04 16:18:11.153875] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.595 16:18:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:44.595 request: 00:06:44.595 { 00:06:44.595 "uuid": "e8757b3a-b9bb-4b57-b93c-c3b75398f4b2", 00:06:44.595 "method": "bdev_lvol_get_lvstores", 00:06:44.595 "req_id": 1 00:06:44.595 } 00:06:44.595 Got JSON-RPC error response 00:06:44.595 response: 00:06:44.595 { 00:06:44.595 "code": -19, 00:06:44.595 "message": "No such device" 00:06:44.595 } 00:06:44.595 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:06:44.596 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.596 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.596 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.596 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:44.854 aio_bdev 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8963881d-8465-4235-b8ab-df9d1907987f 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8963881d-8465-4235-b8ab-df9d1907987f 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:44.854 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:45.113 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8963881d-8465-4235-b8ab-df9d1907987f -t 2000 00:06:45.371 [ 00:06:45.371 { 00:06:45.371 "name": "8963881d-8465-4235-b8ab-df9d1907987f", 00:06:45.371 "aliases": [ 00:06:45.371 "lvs/lvol" 00:06:45.371 ], 00:06:45.371 "product_name": "Logical Volume", 00:06:45.371 "block_size": 4096, 00:06:45.371 "num_blocks": 38912, 00:06:45.371 "uuid": "8963881d-8465-4235-b8ab-df9d1907987f", 00:06:45.371 "assigned_rate_limits": { 00:06:45.371 "rw_ios_per_sec": 0, 00:06:45.371 "rw_mbytes_per_sec": 0, 00:06:45.371 "r_mbytes_per_sec": 0, 00:06:45.371 "w_mbytes_per_sec": 0 00:06:45.371 }, 00:06:45.371 "claimed": false, 00:06:45.371 "zoned": false, 00:06:45.371 "supported_io_types": { 00:06:45.371 "read": true, 00:06:45.371 "write": true, 00:06:45.371 "unmap": true, 00:06:45.371 "flush": false, 00:06:45.371 "reset": true, 00:06:45.371 "nvme_admin": false, 00:06:45.371 "nvme_io": false, 00:06:45.371 "nvme_io_md": false, 00:06:45.371 "write_zeroes": true, 00:06:45.371 "zcopy": false, 00:06:45.371 "get_zone_info": false, 00:06:45.371 "zone_management": false, 00:06:45.371 "zone_append": false, 00:06:45.371 "compare": false, 00:06:45.371 "compare_and_write": false, 
00:06:45.371 "abort": false, 00:06:45.371 "seek_hole": true, 00:06:45.371 "seek_data": true, 00:06:45.371 "copy": false, 00:06:45.371 "nvme_iov_md": false 00:06:45.371 }, 00:06:45.371 "driver_specific": { 00:06:45.371 "lvol": { 00:06:45.371 "lvol_store_uuid": "e8757b3a-b9bb-4b57-b93c-c3b75398f4b2", 00:06:45.371 "base_bdev": "aio_bdev", 00:06:45.371 "thin_provision": false, 00:06:45.371 "num_allocated_clusters": 38, 00:06:45.371 "snapshot": false, 00:06:45.371 "clone": false, 00:06:45.371 "esnap_clone": false 00:06:45.372 } 00:06:45.372 } 00:06:45.372 } 00:06:45.372 ] 00:06:45.372 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:45.372 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:45.372 16:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:45.372 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:45.372 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:45.372 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:45.630 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:45.630 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8963881d-8465-4235-b8ab-df9d1907987f 00:06:45.889 16:18:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8757b3a-b9bb-4b57-b93c-c3b75398f4b2 00:06:45.889 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:46.148 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.148 00:06:46.148 real 0m16.848s 00:06:46.148 user 0m43.042s 00:06:46.148 sys 0m4.063s 00:06:46.148 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.148 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:46.148 ************************************ 00:06:46.148 END TEST lvs_grow_dirty 00:06:46.148 ************************************ 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:06:46.407 16:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:06:46.407 nvmf_trace.0 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.407 rmmod nvme_tcp 00:06:46.407 rmmod nvme_fabrics 00:06:46.407 rmmod nvme_keyring 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2672144 ']' 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2672144 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2672144 ']' 00:06:46.407 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2672144 
00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672144 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672144' 00:06:46.408 killing process with pid 2672144 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2672144 00:06:46.408 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2672144 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.667 16:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.570 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.570 00:06:48.570 real 0m41.346s 00:06:48.570 user 1m3.546s 00:06:48.570 sys 0m10.309s 00:06:48.570 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.570 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.570 ************************************ 00:06:48.570 END TEST nvmf_lvs_grow 00:06:48.570 ************************************ 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.829 ************************************ 00:06:48.829 START TEST nvmf_bdev_io_wait 00:06:48.829 ************************************ 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:48.829 * Looking for test storage... 
00:06:48.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.829 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.829 --rc genhtml_branch_coverage=1 00:06:48.829 --rc genhtml_function_coverage=1 00:06:48.829 --rc genhtml_legend=1 00:06:48.829 --rc geninfo_all_blocks=1 00:06:48.829 --rc geninfo_unexecuted_blocks=1 00:06:48.829 00:06:48.829 ' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.829 --rc genhtml_branch_coverage=1 00:06:48.829 --rc genhtml_function_coverage=1 00:06:48.829 --rc genhtml_legend=1 00:06:48.829 --rc geninfo_all_blocks=1 00:06:48.829 --rc geninfo_unexecuted_blocks=1 00:06:48.829 00:06:48.829 ' 00:06:48.829 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.829 --rc genhtml_branch_coverage=1 00:06:48.829 --rc genhtml_function_coverage=1 00:06:48.829 --rc genhtml_legend=1 00:06:48.829 --rc geninfo_all_blocks=1 00:06:48.829 --rc geninfo_unexecuted_blocks=1 00:06:48.829 00:06:48.830 ' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.830 --rc genhtml_branch_coverage=1 00:06:48.830 --rc genhtml_function_coverage=1 00:06:48.830 --rc genhtml_legend=1 00:06:48.830 --rc geninfo_all_blocks=1 00:06:48.830 --rc geninfo_unexecuted_blocks=1 00:06:48.830 00:06:48.830 ' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.830 16:18:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.830 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.101 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:54.102 16:18:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:54.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:54.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.102 16:18:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:54.102 Found net devices under 0000:86:00.0: cvl_0_0 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.102 
16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:54.102 Found net devices under 0000:86:00.1: cvl_0_1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.102 16:18:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:54.102 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:54.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:06:54.102 00:06:54.102 --- 10.0.0.2 ping statistics --- 00:06:54.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.102 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:54.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:06:54.103 00:06:54.103 --- 10.0.0.1 ping statistics --- 00:06:54.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.103 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2676234 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2676234 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2676234 ']' 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 [2024-11-04 16:18:20.610937] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:06:54.103 [2024-11-04 16:18:20.610977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.103 [2024-11-04 16:18:20.675620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.103 [2024-11-04 16:18:20.719529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.103 [2024-11-04 16:18:20.719565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:54.103 [2024-11-04 16:18:20.719572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.103 [2024-11-04 16:18:20.719578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.103 [2024-11-04 16:18:20.719583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.103 [2024-11-04 16:18:20.721038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.103 [2024-11-04 16:18:20.721137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.103 [2024-11-04 16:18:20.721257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.103 [2024-11-04 16:18:20.721259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 [2024-11-04 16:18:20.873327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 Malloc0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 
16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.103 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:54.103 [2024-11-04 16:18:20.920696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2676341 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2676343 
00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:54.363 { 00:06:54.363 "params": { 00:06:54.363 "name": "Nvme$subsystem", 00:06:54.363 "trtype": "$TEST_TRANSPORT", 00:06:54.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.363 "adrfam": "ipv4", 00:06:54.363 "trsvcid": "$NVMF_PORT", 00:06:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.363 "hdgst": ${hdgst:-false}, 00:06:54.363 "ddgst": ${ddgst:-false} 00:06:54.363 }, 00:06:54.363 "method": "bdev_nvme_attach_controller" 00:06:54.363 } 00:06:54.363 EOF 00:06:54.363 )") 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2676345 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:54.363 { 00:06:54.363 "params": { 00:06:54.363 
"name": "Nvme$subsystem", 00:06:54.363 "trtype": "$TEST_TRANSPORT", 00:06:54.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.363 "adrfam": "ipv4", 00:06:54.363 "trsvcid": "$NVMF_PORT", 00:06:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.363 "hdgst": ${hdgst:-false}, 00:06:54.363 "ddgst": ${ddgst:-false} 00:06:54.363 }, 00:06:54.363 "method": "bdev_nvme_attach_controller" 00:06:54.363 } 00:06:54.363 EOF 00:06:54.363 )") 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2676348 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:06:54.363 { 00:06:54.363 "params": { 00:06:54.363 "name": "Nvme$subsystem", 00:06:54.363 "trtype": "$TEST_TRANSPORT", 00:06:54.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.363 "adrfam": "ipv4", 00:06:54.363 "trsvcid": "$NVMF_PORT", 00:06:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.363 "hdgst": ${hdgst:-false}, 00:06:54.363 "ddgst": ${ddgst:-false} 00:06:54.363 }, 00:06:54.363 "method": "bdev_nvme_attach_controller" 00:06:54.363 } 00:06:54.363 EOF 00:06:54.363 )") 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:54.363 { 00:06:54.363 "params": { 00:06:54.363 "name": "Nvme$subsystem", 00:06:54.363 "trtype": "$TEST_TRANSPORT", 00:06:54.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.363 "adrfam": "ipv4", 00:06:54.363 "trsvcid": "$NVMF_PORT", 00:06:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.363 "hdgst": ${hdgst:-false}, 00:06:54.363 "ddgst": ${ddgst:-false} 00:06:54.363 }, 00:06:54.363 "method": "bdev_nvme_attach_controller" 00:06:54.363 } 00:06:54.363 EOF 00:06:54.363 )") 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2676341 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:54.363 
16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:54.363 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.363 "params": { 00:06:54.363 "name": "Nvme1", 00:06:54.363 "trtype": "tcp", 00:06:54.363 "traddr": "10.0.0.2", 00:06:54.363 "adrfam": "ipv4", 00:06:54.363 "trsvcid": "4420", 00:06:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:54.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:54.363 "hdgst": false, 00:06:54.363 "ddgst": false 00:06:54.363 }, 00:06:54.363 "method": "bdev_nvme_attach_controller" 00:06:54.363 }' 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.364 "params": { 00:06:54.364 "name": "Nvme1", 00:06:54.364 "trtype": "tcp", 00:06:54.364 "traddr": "10.0.0.2", 00:06:54.364 "adrfam": "ipv4", 00:06:54.364 "trsvcid": "4420", 00:06:54.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:54.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:54.364 "hdgst": false, 00:06:54.364 "ddgst": false 00:06:54.364 }, 00:06:54.364 "method": "bdev_nvme_attach_controller" 00:06:54.364 }' 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.364 "params": { 00:06:54.364 "name": "Nvme1", 00:06:54.364 "trtype": "tcp", 00:06:54.364 "traddr": "10.0.0.2", 00:06:54.364 "adrfam": "ipv4", 00:06:54.364 "trsvcid": "4420", 00:06:54.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:54.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:54.364 "hdgst": false, 00:06:54.364 "ddgst": false 00:06:54.364 }, 00:06:54.364 "method": "bdev_nvme_attach_controller" 00:06:54.364 }' 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:54.364 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.364 "params": { 00:06:54.364 "name": "Nvme1", 00:06:54.364 "trtype": "tcp", 00:06:54.364 "traddr": "10.0.0.2", 00:06:54.364 "adrfam": "ipv4", 00:06:54.364 "trsvcid": "4420", 00:06:54.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:54.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:54.364 "hdgst": false, 00:06:54.364 "ddgst": false 00:06:54.364 }, 00:06:54.364 "method": "bdev_nvme_attach_controller" 00:06:54.364 }' 00:06:54.364 [2024-11-04 16:18:20.974009] Starting SPDK v25.01-pre git sha1 
018f47196 / DPDK 24.03.0 initialization... 00:06:54.364 [2024-11-04 16:18:20.974009] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:06:54.364 [2024-11-04 16:18:20.974043] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:06:54.364 [2024-11-04 16:18:20.974060] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:06:54.364 [2024-11-04 16:18:20.974060] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:06:54.364 [2024-11-04 16:18:20.974079] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:06:54.364 [2024-11-04 16:18:20.975507] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
00:06:54.364 [2024-11-04 16:18:20.975549] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:06:54.364 [2024-11-04 16:18:21.166826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.623 [2024-11-04 16:18:21.209479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:06:54.623 [2024-11-04 16:18:21.261703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.623 [2024-11-04 16:18:21.302187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:06:54.623 [2024-11-04 16:18:21.362929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.623 [2024-11-04 16:18:21.418187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.623 [2024-11-04 16:18:21.422098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.882 [2024-11-04 16:18:21.460842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:06:54.882 Running I/O for 1 seconds... 00:06:54.882 Running I/O for 1 seconds... 00:06:54.882 Running I/O for 1 seconds... 00:06:55.140 Running I/O for 1 seconds... 
00:06:56.078 13285.00 IOPS, 51.89 MiB/s 00:06:56.078 Latency(us) 00:06:56.078 [2024-11-04T15:18:22.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.078 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:06:56.078 Nvme1n1 : 1.01 13347.26 52.14 0.00 0.00 9561.92 4400.27 16477.62 00:06:56.078 [2024-11-04T15:18:22.902Z] =================================================================================================================== 00:06:56.078 [2024-11-04T15:18:22.902Z] Total : 13347.26 52.14 0.00 0.00 9561.92 4400.27 16477.62 00:06:56.078 10013.00 IOPS, 39.11 MiB/s 00:06:56.078 Latency(us) 00:06:56.078 [2024-11-04T15:18:22.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.078 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:06:56.078 Nvme1n1 : 1.01 10068.02 39.33 0.00 0.00 12664.87 6335.15 20846.69 00:06:56.078 [2024-11-04T15:18:22.902Z] =================================================================================================================== 00:06:56.078 [2024-11-04T15:18:22.902Z] Total : 10068.02 39.33 0.00 0.00 12664.87 6335.15 20846.69 00:06:56.078 252672.00 IOPS, 987.00 MiB/s 00:06:56.078 Latency(us) 00:06:56.078 [2024-11-04T15:18:22.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.078 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:06:56.078 Nvme1n1 : 1.00 252295.25 985.53 0.00 0.00 504.47 224.30 1490.16 00:06:56.078 [2024-11-04T15:18:22.902Z] =================================================================================================================== 00:06:56.078 [2024-11-04T15:18:22.902Z] Total : 252295.25 985.53 0.00 0.00 504.47 224.30 1490.16 00:06:56.078 9867.00 IOPS, 38.54 MiB/s 00:06:56.078 Latency(us) 00:06:56.078 [2024-11-04T15:18:22.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.078 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:06:56.078 Nvme1n1 : 1.01 9951.21 38.87 0.00 0.00 12832.99 3900.95 24841.26 00:06:56.078 [2024-11-04T15:18:22.902Z] =================================================================================================================== 00:06:56.078 [2024-11-04T15:18:22.902Z] Total : 9951.21 38.87 0.00 0.00 12832.99 3900.95 24841.26 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2676343 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2676345 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2676348 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:06:56.078 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:56.078 rmmod nvme_tcp 00:06:56.078 rmmod nvme_fabrics 00:06:56.078 rmmod nvme_keyring 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2676234 ']' 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2676234 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2676234 ']' 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2676234 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676234 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676234' 00:06:56.337 killing process with pid 2676234 00:06:56.337 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2676234 00:06:56.337 16:18:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2676234 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.337 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:58.873 00:06:58.873 real 0m9.749s 00:06:58.873 user 0m16.061s 00:06:58.873 sys 0m5.532s 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:58.873 ************************************ 
00:06:58.873 END TEST nvmf_bdev_io_wait 00:06:58.873 ************************************ 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.873 ************************************ 00:06:58.873 START TEST nvmf_queue_depth 00:06:58.873 ************************************ 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:58.873 * Looking for test storage... 00:06:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:06:58.873 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.874 --rc genhtml_branch_coverage=1 00:06:58.874 --rc genhtml_function_coverage=1 00:06:58.874 --rc genhtml_legend=1 00:06:58.874 --rc geninfo_all_blocks=1 00:06:58.874 --rc 
geninfo_unexecuted_blocks=1 00:06:58.874 00:06:58.874 ' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.874 --rc genhtml_branch_coverage=1 00:06:58.874 --rc genhtml_function_coverage=1 00:06:58.874 --rc genhtml_legend=1 00:06:58.874 --rc geninfo_all_blocks=1 00:06:58.874 --rc geninfo_unexecuted_blocks=1 00:06:58.874 00:06:58.874 ' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.874 --rc genhtml_branch_coverage=1 00:06:58.874 --rc genhtml_function_coverage=1 00:06:58.874 --rc genhtml_legend=1 00:06:58.874 --rc geninfo_all_blocks=1 00:06:58.874 --rc geninfo_unexecuted_blocks=1 00:06:58.874 00:06:58.874 ' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.874 --rc genhtml_branch_coverage=1 00:06:58.874 --rc genhtml_function_coverage=1 00:06:58.874 --rc genhtml_legend=1 00:06:58.874 --rc geninfo_all_blocks=1 00:06:58.874 --rc geninfo_unexecuted_blocks=1 00:06:58.874 00:06:58.874 ' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.874 16:18:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.874 16:18:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.874 16:18:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.874 16:18:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.142 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.143 16:18:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:04.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:04.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:04.143 Found net devices under 0000:86:00.0: cvl_0_0 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:04.143 Found net devices under 0000:86:00.1: cvl_0_1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.143 
16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:07:04.143 00:07:04.143 --- 10.0.0.2 ping statistics --- 00:07:04.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.143 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:04.143 00:07:04.143 --- 10.0.0.1 ping statistics --- 00:07:04.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.143 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.143 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2680129 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2680129 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2680129 ']' 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.144 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.144 [2024-11-04 16:18:30.960470] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:07:04.144 [2024-11-04 16:18:30.960513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.404 [2024-11-04 16:18:31.029917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.404 [2024-11-04 16:18:31.070838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.404 [2024-11-04 16:18:31.070876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:04.404 [2024-11-04 16:18:31.070883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.404 [2024-11-04 16:18:31.070889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.404 [2024-11-04 16:18:31.070895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.404 [2024-11-04 16:18:31.071451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.404 [2024-11-04 16:18:31.206513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.404 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.404 Malloc0 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.664 [2024-11-04 16:18:31.248750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.664 16:18:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2680162 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2680162 /var/tmp/bdevperf.sock 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2680162 ']' 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.664 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:04.664 [2024-11-04 16:18:31.298643] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:07:04.665 [2024-11-04 16:18:31.298687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680162 ] 00:07:04.665 [2024-11-04 16:18:31.361505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.665 [2024-11-04 16:18:31.402049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.665 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.665 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:04.665 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:04.665 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.665 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:04.923 NVMe0n1 00:07:04.923 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.923 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:05.182 Running I/O for 10 seconds... 
00:07:07.052 12061.00 IOPS, 47.11 MiB/s [2024-11-04T15:18:35.252Z] 12262.50 IOPS, 47.90 MiB/s [2024-11-04T15:18:36.187Z] 12282.00 IOPS, 47.98 MiB/s [2024-11-04T15:18:37.122Z] 12333.75 IOPS, 48.18 MiB/s [2024-11-04T15:18:38.059Z] 12462.80 IOPS, 48.68 MiB/s [2024-11-04T15:18:39.123Z] 12447.83 IOPS, 48.62 MiB/s [2024-11-04T15:18:40.076Z] 12427.71 IOPS, 48.55 MiB/s [2024-11-04T15:18:41.008Z] 12474.75 IOPS, 48.73 MiB/s [2024-11-04T15:18:41.941Z] 12483.11 IOPS, 48.76 MiB/s [2024-11-04T15:18:41.941Z] 12476.50 IOPS, 48.74 MiB/s 00:07:15.117 Latency(us) 00:07:15.117 [2024-11-04T15:18:41.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.117 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:15.117 Verification LBA range: start 0x0 length 0x4000 00:07:15.117 NVMe0n1 : 10.06 12503.28 48.84 0.00 0.00 81648.78 18724.57 52928.12 00:07:15.117 [2024-11-04T15:18:41.941Z] =================================================================================================================== 00:07:15.117 [2024-11-04T15:18:41.941Z] Total : 12503.28 48.84 0.00 0.00 81648.78 18724.57 52928.12 00:07:15.117 { 00:07:15.117 "results": [ 00:07:15.117 { 00:07:15.117 "job": "NVMe0n1", 00:07:15.117 "core_mask": "0x1", 00:07:15.117 "workload": "verify", 00:07:15.117 "status": "finished", 00:07:15.117 "verify_range": { 00:07:15.117 "start": 0, 00:07:15.117 "length": 16384 00:07:15.117 }, 00:07:15.117 "queue_depth": 1024, 00:07:15.117 "io_size": 4096, 00:07:15.117 "runtime": 10.059922, 00:07:15.117 "iops": 12503.277858416795, 00:07:15.117 "mibps": 48.84092913444061, 00:07:15.117 "io_failed": 0, 00:07:15.117 "io_timeout": 0, 00:07:15.117 "avg_latency_us": 81648.78137160969, 00:07:15.117 "min_latency_us": 18724.571428571428, 00:07:15.117 "max_latency_us": 52928.1219047619 00:07:15.117 } 00:07:15.117 ], 00:07:15.117 "core_count": 1 00:07:15.117 } 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2680162 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2680162 ']' 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2680162 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.117 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680162 00:07:15.375 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.375 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.375 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680162' 00:07:15.375 killing process with pid 2680162 00:07:15.375 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2680162 00:07:15.375 Received shutdown signal, test time was about 10.000000 seconds 00:07:15.375 00:07:15.375 Latency(us) 00:07:15.375 [2024-11-04T15:18:42.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.375 [2024-11-04T15:18:42.199Z] =================================================================================================================== 00:07:15.375 [2024-11-04T15:18:42.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:15.375 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2680162 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.375 rmmod nvme_tcp 00:07:15.375 rmmod nvme_fabrics 00:07:15.375 rmmod nvme_keyring 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2680129 ']' 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2680129 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2680129 ']' 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2680129 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.375 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680129 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680129' 00:07:15.633 killing process with pid 2680129 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2680129 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2680129 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.633 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.163 16:18:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.163 00:07:18.163 real 0m19.238s 00:07:18.163 user 0m22.939s 00:07:18.163 sys 0m5.704s 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:18.163 ************************************ 00:07:18.163 END TEST nvmf_queue_depth 00:07:18.163 ************************************ 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.163 ************************************ 00:07:18.163 START TEST nvmf_target_multipath 00:07:18.163 ************************************ 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:18.163 * Looking for test storage... 
00:07:18.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:18.163 16:18:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.163 --rc genhtml_branch_coverage=1 00:07:18.163 --rc genhtml_function_coverage=1 00:07:18.163 --rc genhtml_legend=1 00:07:18.163 --rc geninfo_all_blocks=1 00:07:18.163 --rc geninfo_unexecuted_blocks=1 00:07:18.163 00:07:18.163 ' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.163 --rc genhtml_branch_coverage=1 00:07:18.163 --rc genhtml_function_coverage=1 00:07:18.163 --rc genhtml_legend=1 00:07:18.163 --rc geninfo_all_blocks=1 00:07:18.163 --rc geninfo_unexecuted_blocks=1 00:07:18.163 00:07:18.163 ' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.163 --rc genhtml_branch_coverage=1 00:07:18.163 --rc genhtml_function_coverage=1 00:07:18.163 --rc genhtml_legend=1 00:07:18.163 --rc geninfo_all_blocks=1 00:07:18.163 --rc geninfo_unexecuted_blocks=1 00:07:18.163 00:07:18.163 ' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.163 --rc genhtml_branch_coverage=1 00:07:18.163 --rc genhtml_function_coverage=1 00:07:18.163 --rc genhtml_legend=1 00:07:18.163 --rc geninfo_all_blocks=1 00:07:18.163 --rc geninfo_unexecuted_blocks=1 00:07:18.163 00:07:18.163 ' 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.163 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.164 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.423 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.424 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.424 16:18:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:23.424 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.424 16:18:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:07:23.424 00:07:23.424 --- 10.0.0.2 ping statistics --- 00:07:23.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.424 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:23.424 00:07:23.424 --- 10.0.0.1 ping statistics --- 00:07:23.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.424 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:23.424 only one NIC for nvmf test 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:23.424 16:18:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.424 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.424 rmmod nvme_tcp 00:07:23.424 rmmod nvme_fabrics 00:07:23.424 rmmod nvme_keyring 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.682 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:25.583 00:07:25.583 real 0m7.820s 00:07:25.583 user 0m1.647s 00:07:25.583 sys 0m4.090s 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.583 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:25.583 ************************************ 00:07:25.583 END TEST nvmf_target_multipath 00:07:25.583 ************************************ 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.841 ************************************ 00:07:25.841 START TEST nvmf_zcopy 00:07:25.841 ************************************ 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:25.841 * Looking for test storage... 00:07:25.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.841 16:18:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.841 --rc genhtml_branch_coverage=1 00:07:25.841 --rc genhtml_function_coverage=1 00:07:25.841 --rc genhtml_legend=1 00:07:25.841 --rc geninfo_all_blocks=1 00:07:25.841 --rc geninfo_unexecuted_blocks=1 00:07:25.841 00:07:25.841 ' 00:07:25.841 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.841 --rc genhtml_branch_coverage=1 00:07:25.841 --rc genhtml_function_coverage=1 00:07:25.842 --rc genhtml_legend=1 00:07:25.842 --rc geninfo_all_blocks=1 00:07:25.842 --rc geninfo_unexecuted_blocks=1 00:07:25.842 00:07:25.842 ' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.842 --rc genhtml_branch_coverage=1 00:07:25.842 --rc genhtml_function_coverage=1 00:07:25.842 --rc genhtml_legend=1 00:07:25.842 --rc geninfo_all_blocks=1 00:07:25.842 --rc geninfo_unexecuted_blocks=1 00:07:25.842 00:07:25.842 ' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.842 --rc genhtml_branch_coverage=1 00:07:25.842 --rc 
genhtml_function_coverage=1 00:07:25.842 --rc genhtml_legend=1 00:07:25.842 --rc geninfo_all_blocks=1 00:07:25.842 --rc geninfo_unexecuted_blocks=1 00:07:25.842 00:07:25.842 ' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.842 16:18:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.842 16:18:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:25.842 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.105 16:18:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:31.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:31.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:31.105 Found net devices under 0000:86:00.0: cvl_0_0 00:07:31.105 16:18:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:31.105 Found net devices under 0000:86:00.1: cvl_0_1 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.105 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.106 16:18:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.106 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:07:31.364 00:07:31.364 --- 10.0.0.2 ping statistics --- 00:07:31.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.364 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:31.364 00:07:31.364 --- 10.0.0.1 ping statistics --- 00:07:31.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.364 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.364 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2689048 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2689048 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2689048 ']' 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.364 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.365 [2024-11-04 16:18:58.079008] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:07:31.365 [2024-11-04 16:18:58.079052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.365 [2024-11-04 16:18:58.145769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.365 [2024-11-04 16:18:58.183680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.365 [2024-11-04 16:18:58.183719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:31.365 [2024-11-04 16:18:58.183728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.365 [2024-11-04 16:18:58.183734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.365 [2024-11-04 16:18:58.183738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.365 [2024-11-04 16:18:58.184291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 [2024-11-04 16:18:58.314990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 [2024-11-04 16:18:58.331181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 malloc0 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.622 { 00:07:31.622 "params": { 00:07:31.622 "name": "Nvme$subsystem", 00:07:31.622 "trtype": "$TEST_TRANSPORT", 00:07:31.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.622 "adrfam": "ipv4", 00:07:31.622 "trsvcid": "$NVMF_PORT", 00:07:31.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.622 "hdgst": ${hdgst:-false}, 00:07:31.622 "ddgst": ${ddgst:-false} 00:07:31.622 }, 00:07:31.622 "method": "bdev_nvme_attach_controller" 00:07:31.622 } 00:07:31.622 EOF 00:07:31.622 )") 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:31.622 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.622 "params": { 00:07:31.622 "name": "Nvme1", 00:07:31.622 "trtype": "tcp", 00:07:31.622 "traddr": "10.0.0.2", 00:07:31.622 "adrfam": "ipv4", 00:07:31.622 "trsvcid": "4420", 00:07:31.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.622 "hdgst": false, 00:07:31.622 "ddgst": false 00:07:31.622 }, 00:07:31.622 "method": "bdev_nvme_attach_controller" 00:07:31.622 }' 00:07:31.622 [2024-11-04 16:18:58.408737] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:07:31.622 [2024-11-04 16:18:58.408779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689071 ] 00:07:31.880 [2024-11-04 16:18:58.471094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.880 [2024-11-04 16:18:58.511768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.138 Running I/O for 10 seconds... 
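The `gen_nvmf_target_json` expansion traced above (nvmf/common.sh@560–586) builds one JSON `bdev_nvme_attach_controller` entry per subsystem via a heredoc, defaulting `hdgst`/`ddgst` to `false`, then comma-joins the entries for bdevperf's `--json /dev/fd/62`. A minimal stand-alone sketch of that mechanism follows; the helper name `gen_target_json` and the hard-coded transport values are illustrative stand-ins for the environment the real script inherits, not the actual nvmf/common.sh function:

```shell
#!/usr/bin/env bash
# Illustrative values; in the real test these come from nvmf_tcp_init and
# the test environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Hypothetical stand-in for gen_nvmf_target_json: emit one attach-controller
# JSON object per subsystem number (default subsystem 1), comma-joined.
gen_target_json() {
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		# ${hdgst:-false}/${ddgst:-false}: digests default off unless the
		# caller exported hdgst/ddgst, mirroring the traced heredoc.
		config+=("$(
			cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Comma-join the per-subsystem objects, as "$config[*]" with IFS=, does
	# in the traced script before piping through jq.
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

In the traced run this output is the attach-controller config printed at nvmf/common.sh@586, which bdevperf reads over the `/dev/fd/62` process substitution.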
00:07:34.007 8558.00 IOPS, 66.86 MiB/s [2024-11-04T15:19:02.207Z] 8643.00 IOPS, 67.52 MiB/s [2024-11-04T15:19:03.143Z] 8708.33 IOPS, 68.03 MiB/s [2024-11-04T15:19:04.079Z] 8755.00 IOPS, 68.40 MiB/s [2024-11-04T15:19:05.015Z] 8777.60 IOPS, 68.58 MiB/s [2024-11-04T15:19:05.951Z] 8790.83 IOPS, 68.68 MiB/s [2024-11-04T15:19:06.886Z] 8810.57 IOPS, 68.83 MiB/s [2024-11-04T15:19:07.822Z] 8824.12 IOPS, 68.94 MiB/s [2024-11-04T15:19:09.199Z] 8827.67 IOPS, 68.97 MiB/s [2024-11-04T15:19:09.199Z] 8832.70 IOPS, 69.01 MiB/s 00:07:42.375 Latency(us) 00:07:42.375 [2024-11-04T15:19:09.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.375 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:07:42.375 Verification LBA range: start 0x0 length 0x1000 00:07:42.375 Nvme1n1 : 10.01 8833.28 69.01 0.00 0.00 14448.50 1053.26 21096.35 00:07:42.375 [2024-11-04T15:19:09.199Z] =================================================================================================================== 00:07:42.375 [2024-11-04T15:19:09.199Z] Total : 8833.28 69.01 0.00 0.00 14448.50 1053.26 21096.35 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2690900 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:42.375 16:19:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:42.375 { 00:07:42.375 "params": { 00:07:42.375 "name": "Nvme$subsystem", 00:07:42.375 "trtype": "$TEST_TRANSPORT", 00:07:42.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:42.375 "adrfam": "ipv4", 00:07:42.375 "trsvcid": "$NVMF_PORT", 00:07:42.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:42.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:42.375 "hdgst": ${hdgst:-false}, 00:07:42.375 "ddgst": ${ddgst:-false} 00:07:42.375 }, 00:07:42.375 "method": "bdev_nvme_attach_controller" 00:07:42.375 } 00:07:42.375 EOF 00:07:42.375 )") 00:07:42.375 [2024-11-04 16:19:08.986670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.375 [2024-11-04 16:19:08.986705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:42.375 [2024-11-04 16:19:08.994656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.375 [2024-11-04 16:19:08.994670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:42.375 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:42.375 "params": { 00:07:42.375 "name": "Nvme1", 00:07:42.375 "trtype": "tcp", 00:07:42.375 "traddr": "10.0.0.2", 00:07:42.375 "adrfam": "ipv4", 00:07:42.375 "trsvcid": "4420", 00:07:42.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:42.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:42.375 "hdgst": false, 00:07:42.375 "ddgst": false 00:07:42.375 }, 00:07:42.376 "method": "bdev_nvme_attach_controller" 00:07:42.376 }' 00:07:42.376 [2024-11-04 16:19:09.002669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.002680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.010689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.010698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.018708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.018717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.026732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.026741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.028332] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:07:42.376 [2024-11-04 16:19:09.028371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690900 ] 00:07:42.376 [2024-11-04 16:19:09.034754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.034764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.042777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.042788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.050796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.050806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.058818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.058828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.066841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.066850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.074863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.074872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.082882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.082891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:07:42.376 [2024-11-04 16:19:09.090905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.090914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.091609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.376 [2024-11-04 16:19:09.098928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.098939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.106955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.106973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.114968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.114977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.122990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.122999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.131011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.131022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.133222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.376 [2024-11-04 16:19:09.139033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.139044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.147066] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.147082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.155079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.155095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.163099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.163112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.171118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.171129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.179139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.179151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.187161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.187172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.376 [2024-11-04 16:19:09.195185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.376 [2024-11-04 16:19:09.195197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.203204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.203213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.211224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.211232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.219244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.219253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.227282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.227301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.235294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.235308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.243317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.243329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.251338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.251350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.259357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.259366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.267380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.267389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.275402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 
[2024-11-04 16:19:09.275410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.283421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.283429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.291450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.291463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.299469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.299481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.307491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.307505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.315509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.315518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.363873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.363891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.371670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.371681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 Running I/O for 5 seconds... 
00:07:42.635 [2024-11-04 16:19:09.379689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.379699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.391338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.391360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.400059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.400082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.409071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.409089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.418128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.418151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.427516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.427534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.436642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.436659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.445846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.445864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.635 [2024-11-04 16:19:09.454989] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.635 [2024-11-04 16:19:09.455007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.464071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.464089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.473364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.473381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.482725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.482743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.491292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.491310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.499983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.500001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.508834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.508852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.518315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.518334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.528238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.528257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.536963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.536981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.546231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.546250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.555237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.555255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.565017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.565035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.573573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.573592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.582763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.582782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.592063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.592085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.601808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 
[2024-11-04 16:19:09.601827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.610610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.610628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.619945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.619963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.629137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.629155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.638752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.638771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.648019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.648038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.656929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.656948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.665598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.665621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.674598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.674621] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.683764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.683784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.693062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.693081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.701640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.701658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.710624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.710643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:42.896 [2024-11-04 16:19:09.719913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:42.896 [2024-11-04 16:19:09.719933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.729590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.729616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.739199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.739219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.747869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.747888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:43.170 [2024-11-04 16:19:09.756907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.756929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.766108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.766132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.775427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.775447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.784560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.784579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.793124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.793144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.802249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.802268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.811333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.811351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.820589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.820615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.829982] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.830001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.838559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.838578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.847764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.847783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.856755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.856774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.865750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.865769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.875016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.875035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.170 [2024-11-04 16:19:09.884802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.170 [2024-11-04 16:19:09.884821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.893998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.894016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.902621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.902639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.911752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.911771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.920922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.920941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.929910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.929930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.939179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.939199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.948869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.948888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.958190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.958209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.966774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.966794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.975886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 
[2024-11-04 16:19:09.975905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.985126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.985145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.171 [2024-11-04 16:19:09.994502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.171 [2024-11-04 16:19:09.994522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.003691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.003711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.014579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.014611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.023346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.023366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.032761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.032782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.043446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.043467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.052104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.052124] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.061261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.061280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.070397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.070417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.078625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.078649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.088738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.088760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.098333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.098354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.107171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.107191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.116701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.116720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.125265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.125285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:43.429 [2024-11-04 16:19:10.134458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.134477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.143842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.143861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.152367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.152389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.161722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.161743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.170930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.170949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.179622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.179640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.189303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.189322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.199224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.199243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.429 [2024-11-04 16:19:10.207773] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.429 [2024-11-04 16:19:10.207792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.430 [2024-11-04 16:19:10.216861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.430 [2024-11-04 16:19:10.216881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.430 [2024-11-04 16:19:10.226181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.430 [2024-11-04 16:19:10.226201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.430 [2024-11-04 16:19:10.235402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.430 [2024-11-04 16:19:10.235421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.430 [2024-11-04 16:19:10.244455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.430 [2024-11-04 16:19:10.244474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.688 [2024-11-04 16:19:10.254117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.688 [2024-11-04 16:19:10.254136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.262903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.262922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.272226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.272245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.281409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.281428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.290744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.290763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.299847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.299866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.309028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.309047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.318371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.318390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.327836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.327855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.337150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.337173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.345695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.345714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.354411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 
[2024-11-04 16:19:10.354430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.363523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.363542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.372746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.372765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 16662.00 IOPS, 130.17 MiB/s [2024-11-04T15:19:10.513Z] [2024-11-04 16:19:10.382152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.382172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.391589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.391614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.400615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.400634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.409726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.409746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.419013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.419032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.428215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 
[2024-11-04 16:19:10.428234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.437030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.437049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.445653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.445672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.455446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.455469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.464241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.464261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.473670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.473692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.482801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.482820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.492744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.492763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.501496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.501516] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.689 [2024-11-04 16:19:10.510799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.689 [2024-11-04 16:19:10.510818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.519982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.520001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.529473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.529493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.538707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.538726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.548095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.548115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.556865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.556885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.566562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.566581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.575390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.575409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:43.948 [2024-11-04 16:19:10.584461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.584481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.593944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.593962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.603452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.603473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.613279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.613298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.621901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.621919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.631288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.631310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.640294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.640312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.649381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.649399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.658569] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.658586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.667271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.667289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.676319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.676336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.685492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.685510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.694820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.694837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.703412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.703430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.712829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.712847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.721841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.721861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.731044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.731065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.740852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.740871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.749644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.749662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.758617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.758651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:43.948 [2024-11-04 16:19:10.768267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:43.948 [2024-11-04 16:19:10.768285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.776900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.776918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.786079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.786098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.795348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.795366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.804430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 
[2024-11-04 16:19:10.804451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.813071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.813090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.821715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.821733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.830487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.207 [2024-11-04 16:19:10.830504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.207 [2024-11-04 16:19:10.839184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.839202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.848284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.848302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.857360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.857378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.866381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.866398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.875987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.876006] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.884395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.884413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.893474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.893491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.902663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.902681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.911870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.911889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.920946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.920964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.930233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.930252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.938745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.938763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.948344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.948363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:44.208 [2024-11-04 16:19:10.956947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.956964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.966152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.966170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.973110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.973131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.984067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.984085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:10.992625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:10.992644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:11.001669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:11.001687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:11.011355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:11.011373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:11.019978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:11.019996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.208 [2024-11-04 16:19:11.029065] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.208 [2024-11-04 16:19:11.029083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.038274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.038293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.047832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.047850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.056406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.056424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.065973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.065991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.074562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.074581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.083866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.083884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.092245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.092263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.101739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.101758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.110433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.110453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.120124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.120144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.129216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.129235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.138840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.138861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.147599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.147624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.156898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.156918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.166021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.166041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.175293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 
[2024-11-04 16:19:11.175312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.184329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.184349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.192940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.192959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.201423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.201442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.210552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.210571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.219923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.219942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.229217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.229236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.238474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.238494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.247663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.247682] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.256859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.256878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.266487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.266506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.275077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.275097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.467 [2024-11-04 16:19:11.284022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.467 [2024-11-04 16:19:11.284041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.292913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.292931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.301430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.301449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.310596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.310622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.319884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.319904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:44.727 [2024-11-04 16:19:11.329270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.329289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.338437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.338455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.348078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.348096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.356752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.356770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.365395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.365413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.374999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.375018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 16788.00 IOPS, 131.16 MiB/s [2024-11-04T15:19:11.551Z] [2024-11-04 16:19:11.383517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.383535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.392478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.392497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:44.727 [2024-11-04 16:19:11.401550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.401569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.410607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.410626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.419908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.419927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.428979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.428997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.437563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.437581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.446796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.446815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.456475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.456494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.727 [2024-11-04 16:19:11.465611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.727 [2024-11-04 16:19:11.465629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.474699] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.474718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.483851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.483875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.493467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.493486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.502171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.502190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.511663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.511681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.521387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.521405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.530482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.530501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.539185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.539202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.728 [2024-11-04 16:19:11.548293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:44.728 [2024-11-04 16:19:11.548310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.557487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.557505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.566548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.566566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.575660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.575678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.584900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.584918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.593921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.593940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.603162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.603180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.612493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.612511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.622123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 
[2024-11-04 16:19:11.622140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.630811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.630828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.640063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.640081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.648889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.648908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.658118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.658141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.666789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.666808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.675822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.675840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.684931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.684950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.694111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.694129] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.703150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.703169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.712417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.712435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.721365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.721384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.730519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.730538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.739707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.739725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.748960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.986 [2024-11-04 16:19:11.748978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.986 [2024-11-04 16:19:11.757418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.757436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.987 [2024-11-04 16:19:11.765933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.765952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:44.987 [2024-11-04 16:19:11.775115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.775133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.987 [2024-11-04 16:19:11.784622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.784641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.987 [2024-11-04 16:19:11.793635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.793653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.987 [2024-11-04 16:19:11.802944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.987 [2024-11-04 16:19:11.802963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.244 [2024-11-04 16:19:11.812170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.244 [2024-11-04 16:19:11.812189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.244 [2024-11-04 16:19:11.820706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.244 [2024-11-04 16:19:11.820726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.244 [2024-11-04 16:19:11.829895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.244 [2024-11-04 16:19:11.829918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.244 [2024-11-04 16:19:11.838951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.244 [2024-11-04 16:19:11.838969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.244 [2024-11-04 16:19:11.848299] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.244 [2024-11-04 16:19:11.848319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... previous two messages repeated for each retry from 16:19:11.857 through 16:19:13.312 ...] 
16841.33 IOPS, 131.57 MiB/s [2024-11-04T15:19:12.587Z] 
[2024-11-04 16:19:13.321493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.540 [2024-11-04 16:19:13.321510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:46.540 [2024-11-04 16:19:13.330520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.540 [2024-11-04 16:19:13.330538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.540 [2024-11-04 16:19:13.340248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.540 [2024-11-04 16:19:13.340266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.540 [2024-11-04 16:19:13.348951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.540 [2024-11-04 16:19:13.348969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.540 [2024-11-04 16:19:13.358117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.540 [2024-11-04 16:19:13.358136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.367254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.367276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.376318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.376337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.385212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.385231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 16878.75 IOPS, 131.87 MiB/s [2024-11-04T15:19:13.623Z] [2024-11-04 16:19:13.394584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.394609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:46.799 [2024-11-04 16:19:13.403830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.403850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.413628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.413651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.422364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.422383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.431531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.431550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.448981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.449001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.457608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.457626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.466885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.466903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.475337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.475355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.483938] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.483957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.493325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.493343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.502558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.502576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.511675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.511693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.520681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.520699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.529746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.529765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.539178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.539196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.548236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.548254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.557297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.557316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.566466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.566484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.575405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.575423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.584425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.584444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.594044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.594063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.603273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.603292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.611712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.799 [2024-11-04 16:19:13.611730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.799 [2024-11-04 16:19:13.620699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.800 [2024-11-04 16:19:13.620717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.058 [2024-11-04 16:19:13.630306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 
[2024-11-04 16:19:13.630324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.638695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.638713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.648405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.648424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.657552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.657571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.666588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.666612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.676239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.676257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.685369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.685387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.693961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.693979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.703366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.703400] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.711964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.711981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.721112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.721130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.730329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.730347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.739354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.739372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.748597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.748621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.757634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.757652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.766939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.766957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.775547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.775565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:47.059 [2024-11-04 16:19:13.784587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.784612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.793554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.793572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.802764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.802782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.812078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.812096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.821247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.821265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.830260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.830279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.839019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.839037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.848170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.848188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.857445] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.857464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.866636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.866655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.059 [2024-11-04 16:19:13.875941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.059 [2024-11-04 16:19:13.875960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.884622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.884645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.893716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.893735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.902988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.903008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.912097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.912117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.921828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.921847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.931005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.931024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.939614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.939633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.948634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.948652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.958299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.958317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.966915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.966932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.975772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.975790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.985056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.985074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:13.994082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:13.994101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.003083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 
[2024-11-04 16:19:14.003102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.012662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.012680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.021337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.021355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.030961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.030980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.040336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.040354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.049081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.049100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.057886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.057908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.067080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.067099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.076364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.076383] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.085293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.085312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.093955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.093974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.103247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.103266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.112339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.112358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.121448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.121467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.130592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.130618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.318 [2024-11-04 16:19:14.139797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.318 [2024-11-04 16:19:14.139816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.148977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.148997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:47.578 [2024-11-04 16:19:14.157586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.157610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.166700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.166720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.175770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.175788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.185359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.185377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.193949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.193967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.203061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.203080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.212027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.212045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.220486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.220504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.229525] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.229548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.238526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.238545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.247072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.247090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.256045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.256064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.265287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.265306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.274521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.274541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.283231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.283249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.291766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.291783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.300836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.300854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.310594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.310618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.319202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.319220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.326056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.326078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.336839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.336857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.346217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.346236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.355386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.355405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.364710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.364729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.373866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 
[2024-11-04 16:19:14.373884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 [2024-11-04 16:19:14.382855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.382873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 16917.40 IOPS, 132.17 MiB/s [2024-11-04T15:19:14.402Z] [2024-11-04 16:19:14.391818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.391837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.578 00:07:47.578 Latency(us) 00:07:47.578 [2024-11-04T15:19:14.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.578 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:07:47.578 Nvme1n1 : 5.01 16918.65 132.18 0.00 0.00 7558.44 3136.37 15416.56 00:07:47.578 [2024-11-04T15:19:14.402Z] =================================================================================================================== 00:07:47.578 [2024-11-04T15:19:14.402Z] Total : 16918.65 132.18 0.00 0.00 7558.44 3136.37 15416.56 00:07:47.578 [2024-11-04 16:19:14.397894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.578 [2024-11-04 16:19:14.397912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.837 [2024-11-04 16:19:14.405912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.838 [2024-11-04 16:19:14.405926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.838 [2024-11-04 16:19:14.413931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.838 [2024-11-04 16:19:14.413942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.838 [2024-11-04 16:19:14.421961] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.838 [2024-11-04 16:19:14.421978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.838 [... the same error pair repeats roughly every 8 ms from 16:19:14.429 through 16:19:14.542 ...] [2024-11-04 16:19:14.550302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.838
[2024-11-04 16:19:14.550312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2690900) - No such process 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2690900 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.838 delay0 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.838 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l 
warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:07:48.096 [2024-11-04 16:19:14.714736] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:54.658 [2024-11-04 16:19:21.450088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa82850 is same with the state(6) to be set 00:07:54.658 Initializing NVMe Controllers 00:07:54.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:54.658 Initialization complete. Launching workers. 00:07:54.658 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5422 00:07:54.658 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5705, failed to submit 37 00:07:54.658 success 5530, unsuccessful 175, failed 0 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.658 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.658 rmmod nvme_tcp 00:07:54.658 rmmod nvme_fabrics 00:07:54.917 rmmod nvme_keyring 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2689048 ']' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2689048 ']' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689048' 00:07:54.917 killing process with pid 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2689048 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.917 16:19:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.917 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.453 00:07:57.453 real 0m31.326s 00:07:57.453 user 0m42.189s 00:07:57.453 sys 0m10.822s 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 ************************************ 00:07:57.453 END TEST nvmf_zcopy 00:07:57.453 ************************************ 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.453 ************************************ 00:07:57.453 START TEST nvmf_nmic 00:07:57.453 ************************************ 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:57.453 * Looking for test storage... 00:07:57.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.453 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.453 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.454 16:19:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.454 --rc genhtml_branch_coverage=1 00:07:57.454 --rc genhtml_function_coverage=1 00:07:57.454 --rc genhtml_legend=1 00:07:57.454 --rc geninfo_all_blocks=1 00:07:57.454 --rc geninfo_unexecuted_blocks=1 00:07:57.454 00:07:57.454 ' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.454 --rc genhtml_branch_coverage=1 00:07:57.454 --rc genhtml_function_coverage=1 00:07:57.454 --rc genhtml_legend=1 00:07:57.454 --rc geninfo_all_blocks=1 00:07:57.454 --rc geninfo_unexecuted_blocks=1 00:07:57.454 00:07:57.454 ' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.454 --rc genhtml_branch_coverage=1 00:07:57.454 --rc genhtml_function_coverage=1 00:07:57.454 --rc genhtml_legend=1 00:07:57.454 --rc geninfo_all_blocks=1 00:07:57.454 --rc geninfo_unexecuted_blocks=1 00:07:57.454 00:07:57.454 ' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.454 --rc genhtml_branch_coverage=1 00:07:57.454 --rc genhtml_function_coverage=1 00:07:57.454 --rc genhtml_legend=1 00:07:57.454 --rc geninfo_all_blocks=1 00:07:57.454 --rc geninfo_unexecuted_blocks=1 00:07:57.454 00:07:57.454 ' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:57.454 16:19:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.454 
16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.454 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.455 
16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.455 16:19:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.723 16:19:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:02.723 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.723 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.723 
16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.723 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:08:02.981 00:08:02.981 --- 10.0.0.2 ping statistics --- 00:08:02.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.981 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:02.981 00:08:02.981 --- 10.0.0.1 ping statistics --- 00:08:02.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.981 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2696498 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2696498 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2696498 ']' 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.981 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:02.981 [2024-11-04 16:19:29.692647] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:08:02.981 [2024-11-04 16:19:29.692692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.981 [2024-11-04 16:19:29.761670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.981 [2024-11-04 16:19:29.805637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.981 [2024-11-04 16:19:29.805673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:02.981 [2024-11-04 16:19:29.805681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.981 [2024-11-04 16:19:29.805688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.981 [2024-11-04 16:19:29.805693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.240 [2024-11-04 16:19:29.807257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.240 [2024-11-04 16:19:29.807352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.240 [2024-11-04 16:19:29.807437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.240 [2024-11-04 16:19:29.807439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 [2024-11-04 16:19:29.952066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.240 
16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 Malloc0 00:08:03.240 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 [2024-11-04 16:19:30.028018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:03.240 test case1: single bdev can't be used in multiple subsystems 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.240 [2024-11-04 16:19:30.055940] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:03.240 [2024-11-04 
16:19:30.055964] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:03.240 [2024-11-04 16:19:30.055972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.240 request: 00:08:03.240 { 00:08:03.240 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:03.240 "namespace": { 00:08:03.240 "bdev_name": "Malloc0", 00:08:03.240 "no_auto_visible": false 00:08:03.240 }, 00:08:03.240 "method": "nvmf_subsystem_add_ns", 00:08:03.240 "req_id": 1 00:08:03.240 } 00:08:03.240 Got JSON-RPC error response 00:08:03.240 response: 00:08:03.240 { 00:08:03.240 "code": -32602, 00:08:03.240 "message": "Invalid parameters" 00:08:03.240 } 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:03.240 Adding namespace failed - expected result. 
00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:03.240 test case2: host connect to nvmf target in multiple paths 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:03.240 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.499 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.499 [2024-11-04 16:19:30.068106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:03.499 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.499 16:19:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.433 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:05.808 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.808 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:05.808 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.808 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:05.808 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:07.857 16:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:07.857 [global] 00:08:07.857 thread=1 00:08:07.857 invalidate=1 00:08:07.857 rw=write 00:08:07.857 time_based=1 00:08:07.857 runtime=1 00:08:07.857 ioengine=libaio 00:08:07.857 direct=1 00:08:07.857 bs=4096 00:08:07.857 iodepth=1 00:08:07.857 norandommap=0 00:08:07.857 numjobs=1 00:08:07.857 00:08:07.857 verify_dump=1 00:08:07.857 verify_backlog=512 00:08:07.857 verify_state_save=0 00:08:07.857 do_verify=1 00:08:07.857 verify=crc32c-intel 00:08:07.857 [job0] 00:08:07.857 filename=/dev/nvme0n1 00:08:07.857 Could not set queue depth (nvme0n1) 00:08:08.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:08.114 fio-3.35 00:08:08.114 Starting 1 thread 00:08:09.045 00:08:09.045 job0: (groupid=0, jobs=1): err= 0: pid=2697446: Mon Nov 4 16:19:35 2024 00:08:09.045 read: IOPS=2194, BW=8779KiB/s (8990kB/s)(8788KiB/1001msec) 00:08:09.045 slat (nsec): min=6616, max=30616, avg=7518.61, stdev=973.45 00:08:09.045 clat (usec): min=185, max=395, avg=235.69, stdev=20.18 00:08:09.045 lat (usec): min=192, max=403, avg=243.21, 
stdev=20.15 00:08:09.045 clat percentiles (usec): 00:08:09.045 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:08:09.045 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:08:09.045 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:08:09.045 | 99.00th=[ 285], 99.50th=[ 285], 99.90th=[ 322], 99.95th=[ 330], 00:08:09.045 | 99.99th=[ 396] 00:08:09.045 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:09.045 slat (usec): min=9, max=28507, avg=21.85, stdev=563.22 00:08:09.045 clat (usec): min=108, max=405, avg=156.31, stdev=42.04 00:08:09.045 lat (usec): min=121, max=28787, avg=178.16, stdev=567.23 00:08:09.045 clat percentiles (usec): 00:08:09.045 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 125], 00:08:09.045 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 149], 00:08:09.045 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 241], 95.00th=[ 245], 00:08:09.045 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 375], 00:08:09.045 | 99.99th=[ 404] 00:08:09.045 bw ( KiB/s): min= 9296, max= 9296, per=90.87%, avg=9296.00, stdev= 0.00, samples=1 00:08:09.045 iops : min= 2324, max= 2324, avg=2324.00, stdev= 0.00, samples=1 00:08:09.045 lat (usec) : 250=86.21%, 500=13.79% 00:08:09.045 cpu : usr=2.50%, sys=4.30%, ctx=4759, majf=0, minf=1 00:08:09.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:09.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:09.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:09.045 issued rwts: total=2197,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:09.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:09.045 00:08:09.045 Run status group 0 (all jobs): 00:08:09.045 READ: bw=8779KiB/s (8990kB/s), 8779KiB/s-8779KiB/s (8990kB/s-8990kB/s), io=8788KiB (8999kB), run=1001-1001msec 00:08:09.045 WRITE: bw=9.99MiB/s (10.5MB/s), 
9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:08:09.045 00:08:09.045 Disk stats (read/write): 00:08:09.045 nvme0n1: ios=2074/2107, merge=0/0, ticks=1469/336, in_queue=1805, util=98.60% 00:08:09.045 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:09.302 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.302 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:09.302 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:09.302 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:09.302 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.303 16:19:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.303 rmmod nvme_tcp 00:08:09.303 rmmod nvme_fabrics 00:08:09.303 rmmod nvme_keyring 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2696498 ']' 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2696498 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2696498 ']' 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2696498 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.303 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2696498 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2696498' 00:08:09.561 killing process with pid 2696498 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2696498 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2696498 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.561 16:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.092 00:08:12.092 real 0m14.549s 00:08:12.092 user 0m32.900s 00:08:12.092 sys 0m5.143s 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.092 ************************************ 00:08:12.092 END TEST nvmf_nmic 00:08:12.092 ************************************ 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:12.092 16:19:38 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.092 ************************************ 00:08:12.092 START TEST nvmf_fio_target 00:08:12.092 ************************************ 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:12.092 * Looking for test storage... 00:08:12.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.092 
16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.092 --rc genhtml_branch_coverage=1 00:08:12.092 --rc genhtml_function_coverage=1 00:08:12.092 --rc genhtml_legend=1 00:08:12.092 --rc geninfo_all_blocks=1 00:08:12.092 --rc geninfo_unexecuted_blocks=1 00:08:12.092 00:08:12.092 ' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.092 --rc genhtml_branch_coverage=1 00:08:12.092 --rc genhtml_function_coverage=1 00:08:12.092 --rc genhtml_legend=1 00:08:12.092 --rc geninfo_all_blocks=1 00:08:12.092 --rc geninfo_unexecuted_blocks=1 00:08:12.092 00:08:12.092 ' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.092 --rc genhtml_branch_coverage=1 00:08:12.092 --rc genhtml_function_coverage=1 00:08:12.092 --rc genhtml_legend=1 00:08:12.092 --rc geninfo_all_blocks=1 00:08:12.092 --rc geninfo_unexecuted_blocks=1 00:08:12.092 00:08:12.092 ' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.092 --rc genhtml_branch_coverage=1 00:08:12.092 --rc 
genhtml_function_coverage=1 00:08:12.092 --rc genhtml_legend=1 00:08:12.092 --rc geninfo_all_blocks=1 00:08:12.092 --rc geninfo_unexecuted_blocks=1 00:08:12.092 00:08:12.092 ' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.092 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.093 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.351 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.351 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.351 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.352 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:17.352 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms
00:08:17.352
00:08:17.352 --- 10.0.0.2 ping statistics ---
00:08:17.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:17.352 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:17.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:17.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms
00:08:17.352
00:08:17.352 --- 10.0.0.1 ping statistics ---
00:08:17.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:17.352 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2701128 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2701128 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2701128 ']' 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.352 [2024-11-04 16:19:43.656781] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:08:17.352 [2024-11-04 16:19:43.656826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:17.352 [2024-11-04 16:19:43.723505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:17.352 [2024-11-04 16:19:43.765938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:17.352 [2024-11-04 16:19:43.765976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:17.352 [2024-11-04 16:19:43.765983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:17.352 [2024-11-04 16:19:43.765989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:17.352 [2024-11-04 16:19:43.765995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:17.352 [2024-11-04 16:19:43.767404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.352 [2024-11-04 16:19:43.767503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.352 [2024-11-04 16:19:43.767595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.352 [2024-11-04 16:19:43.767596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.352 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.353 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.353 [2024-11-04 16:19:44.064648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.353 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.610 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:17.610 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.867 16:19:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:17.867 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.124 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:18.124 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.381 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:18.381 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:18.381 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.638 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:18.638 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.896 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:18.896 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.153 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:19.153 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:19.410 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.410 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:19.410 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.667 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:19.667 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.924 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.181 [2024-11-04 16:19:46.762687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.181 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:20.181 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:20.438 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:08:21.809 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:08:23.784 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:23.784 [global]
00:08:23.784 thread=1
00:08:23.784 invalidate=1
00:08:23.784 rw=write
00:08:23.784 time_based=1
00:08:23.784 runtime=1
00:08:23.784 ioengine=libaio
00:08:23.784 direct=1
00:08:23.784 bs=4096
00:08:23.784 iodepth=1
00:08:23.784 norandommap=0
00:08:23.784 numjobs=1
00:08:23.784
00:08:23.784 verify_dump=1
00:08:23.784 verify_backlog=512
00:08:23.784 verify_state_save=0
00:08:23.784 do_verify=1
00:08:23.784 verify=crc32c-intel
00:08:23.784 [job0]
00:08:23.784 filename=/dev/nvme0n1
00:08:23.784 [job1]
00:08:23.784 filename=/dev/nvme0n2
00:08:23.784 [job2]
00:08:23.784 filename=/dev/nvme0n3
00:08:23.784 [job3]
00:08:23.784 filename=/dev/nvme0n4
00:08:23.784 Could not set queue depth (nvme0n1)
00:08:23.784 Could not set queue depth (nvme0n2)
00:08:23.784 Could not set queue depth (nvme0n3)
00:08:23.784 Could not set queue depth (nvme0n4)
00:08:24.041 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.041 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.041 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.041 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.041 fio-3.35
00:08:24.041 Starting 4 threads
00:08:25.411
00:08:25.411 job0: (groupid=0, jobs=1): err= 0: pid=2702482: Mon Nov 4 16:19:51 2024
00:08:25.411 read: IOPS=373, BW=1493KiB/s (1528kB/s)(1500KiB/1005msec)
00:08:25.411 slat (nsec): min=7082, max=25370, avg=8880.91, stdev=3496.89
00:08:25.411 clat (usec): min=178, max=42097, avg=2421.60, stdev=9171.99
00:08:25.412 lat (usec): min=190, max=42120, avg=2430.48, stdev=9175.04
00:08:25.412 clat percentiles (usec):
00:08:25.412 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235],
00:08:25.412 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249],
00:08:25.412 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[40633],
00:08:25.412 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:08:25.412 | 99.99th=[42206]
00:08:25.412 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets
00:08:25.412 slat (nsec): min=9537, max=51391, avg=10737.50, stdev=2178.36
00:08:25.412 clat (usec): min=133, max=519, avg=166.08, stdev=33.75
00:08:25.412 lat (usec): min=144, max=530, avg=176.82, stdev=34.21
00:08:25.412 clat percentiles (usec):
00:08:25.412 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149],
00:08:25.412 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 165],
00:08:25.412 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200],
00:08:25.412 | 99.00th=[ 285], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 519],
00:08:25.412 | 99.99th=[ 519]
00:08:25.412 bw ( KiB/s): min= 4096, max= 4096, per=22.82%, avg=4096.00, stdev= 0.00, samples=1
00:08:25.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:08:25.412 lat (usec) : 250=82.53%, 500=14.88%, 750=0.34%
00:08:25.412 lat (msec) : 50=2.25%
00:08:25.412 cpu : usr=0.60%, sys=1.00%, ctx=888, majf=0, minf=2
00:08:25.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:25.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:25.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:25.412 issued rwts: total=375,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:25.412 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:25.412 job1: (groupid=0, jobs=1): err= 0: pid=2702483: Mon Nov 4 16:19:51 2024
00:08:25.412 read: IOPS=2262, BW=9051KiB/s (9268kB/s)(9060KiB/1001msec)
00:08:25.412 slat (nsec): min=4859, max=41751, avg=8068.96, stdev=1879.42
00:08:25.412 clat (usec): min=172, max=653, avg=239.90, stdev=54.34
00:08:25.412 lat (usec): min=181, max=661, avg=247.97, stdev=54.29
00:08:25.412 clat percentiles (usec):
00:08:25.412 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204],
00:08:25.412 | 30.00th=[ 212], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 243],
00:08:25.412 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 314],
00:08:25.412 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 644], 99.95th=[ 652],
| 99.99th=[ 652] 00:08:25.412 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:25.412 slat (nsec): min=9721, max=39489, avg=11336.42, stdev=1775.41 00:08:25.412 clat (usec): min=115, max=355, avg=155.02, stdev=26.48 00:08:25.412 lat (usec): min=126, max=395, avg=166.35, stdev=26.74 00:08:25.412 clat percentiles (usec): 00:08:25.412 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 135], 00:08:25.412 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:08:25.412 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 208], 00:08:25.412 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 338], 00:08:25.412 | 99.99th=[ 355] 00:08:25.412 bw ( KiB/s): min=11256, max=11256, per=62.72%, avg=11256.00, stdev= 0.00, samples=1 00:08:25.412 iops : min= 2814, max= 2814, avg=2814.00, stdev= 0.00, samples=1 00:08:25.412 lat (usec) : 250=86.82%, 500=12.54%, 750=0.64% 00:08:25.412 cpu : usr=3.10%, sys=6.50%, ctx=4827, majf=0, minf=1 00:08:25.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 issued rwts: total=2265,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.412 job2: (groupid=0, jobs=1): err= 0: pid=2702484: Mon Nov 4 16:19:51 2024 00:08:25.412 read: IOPS=468, BW=1873KiB/s (1918kB/s)(1924KiB/1027msec) 00:08:25.412 slat (nsec): min=2294, max=33133, avg=5010.11, stdev=4804.52 00:08:25.412 clat (usec): min=180, max=41974, avg=1931.27, stdev=8167.92 00:08:25.412 lat (usec): min=183, max=41998, avg=1936.28, stdev=8171.70 00:08:25.412 clat percentiles (usec): 00:08:25.412 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:08:25.412 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:08:25.412 | 70.00th=[ 
239], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 306], 00:08:25.412 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:25.412 | 99.99th=[42206] 00:08:25.412 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:08:25.412 slat (nsec): min=4046, max=38785, avg=9812.80, stdev=3028.80 00:08:25.412 clat (usec): min=139, max=400, avg=172.29, stdev=19.31 00:08:25.412 lat (usec): min=144, max=439, avg=182.10, stdev=19.08 00:08:25.412 clat percentiles (usec): 00:08:25.412 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:08:25.412 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:25.412 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:08:25.412 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 400], 99.95th=[ 400], 00:08:25.412 | 99.99th=[ 400] 00:08:25.412 bw ( KiB/s): min= 4096, max= 4096, per=22.82%, avg=4096.00, stdev= 0.00, samples=1 00:08:25.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:25.412 lat (usec) : 250=89.73%, 500=8.26% 00:08:25.412 lat (msec) : 50=2.01% 00:08:25.412 cpu : usr=0.29%, sys=0.97%, ctx=994, majf=0, minf=1 00:08:25.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 issued rwts: total=481,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.412 job3: (groupid=0, jobs=1): err= 0: pid=2702485: Mon Nov 4 16:19:51 2024 00:08:25.412 read: IOPS=914, BW=3656KiB/s (3744kB/s)(3660KiB/1001msec) 00:08:25.412 slat (nsec): min=7607, max=44641, avg=8900.10, stdev=2877.75 00:08:25.412 clat (usec): min=199, max=41409, avg=814.76, stdev=4823.31 00:08:25.412 lat (usec): min=207, max=41420, avg=823.66, stdev=4824.69 00:08:25.412 clat percentiles (usec): 00:08:25.412 | 
1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 225], 00:08:25.412 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:08:25.412 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:08:25.412 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:25.412 | 99.99th=[41157] 00:08:25.412 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:08:25.412 slat (usec): min=10, max=31088, avg=66.16, stdev=1211.46 00:08:25.412 clat (usec): min=122, max=398, avg=168.65, stdev=23.52 00:08:25.412 lat (usec): min=135, max=31382, avg=234.81, stdev=1217.30 00:08:25.412 clat percentiles (usec): 00:08:25.412 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:08:25.412 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:08:25.412 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 210], 00:08:25.412 | 99.00th=[ 239], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 400], 00:08:25.412 | 99.99th=[ 400] 00:08:25.412 bw ( KiB/s): min= 4096, max= 4096, per=22.82%, avg=4096.00, stdev= 0.00, samples=1 00:08:25.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:25.412 lat (usec) : 250=92.32%, 500=6.96%, 750=0.05% 00:08:25.412 lat (msec) : 50=0.67% 00:08:25.412 cpu : usr=1.30%, sys=3.20%, ctx=1942, majf=0, minf=1 00:08:25.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.412 issued rwts: total=915,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.412 00:08:25.412 Run status group 0 (all jobs): 00:08:25.412 READ: bw=15.4MiB/s (16.1MB/s), 1493KiB/s-9051KiB/s (1528kB/s-9268kB/s), io=15.8MiB (16.5MB), run=1001-1027msec 00:08:25.412 WRITE: bw=17.5MiB/s (18.4MB/s), 1994KiB/s-9.99MiB/s 
(2042kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1027msec 00:08:25.412 00:08:25.412 Disk stats (read/write): 00:08:25.412 nvme0n1: ios=394/512, merge=0/0, ticks=1604/84, in_queue=1688, util=85.57% 00:08:25.412 nvme0n2: ios=2023/2048, merge=0/0, ticks=1354/301, in_queue=1655, util=89.83% 00:08:25.412 nvme0n3: ios=499/512, merge=0/0, ticks=1629/88, in_queue=1717, util=93.64% 00:08:25.412 nvme0n4: ios=564/743, merge=0/0, ticks=983/122, in_queue=1105, util=94.96% 00:08:25.412 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:25.412 [global] 00:08:25.412 thread=1 00:08:25.412 invalidate=1 00:08:25.412 rw=randwrite 00:08:25.412 time_based=1 00:08:25.412 runtime=1 00:08:25.412 ioengine=libaio 00:08:25.412 direct=1 00:08:25.412 bs=4096 00:08:25.412 iodepth=1 00:08:25.412 norandommap=0 00:08:25.412 numjobs=1 00:08:25.412 00:08:25.412 verify_dump=1 00:08:25.412 verify_backlog=512 00:08:25.412 verify_state_save=0 00:08:25.412 do_verify=1 00:08:25.412 verify=crc32c-intel 00:08:25.412 [job0] 00:08:25.412 filename=/dev/nvme0n1 00:08:25.412 [job1] 00:08:25.412 filename=/dev/nvme0n2 00:08:25.412 [job2] 00:08:25.412 filename=/dev/nvme0n3 00:08:25.412 [job3] 00:08:25.412 filename=/dev/nvme0n4 00:08:25.412 Could not set queue depth (nvme0n1) 00:08:25.412 Could not set queue depth (nvme0n2) 00:08:25.412 Could not set queue depth (nvme0n3) 00:08:25.412 Could not set queue depth (nvme0n4) 00:08:25.670 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:25.670 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:25.670 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:25.670 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:08:25.670 fio-3.35 00:08:25.670 Starting 4 threads 00:08:27.045 00:08:27.045 job0: (groupid=0, jobs=1): err= 0: pid=2702857: Mon Nov 4 16:19:53 2024 00:08:27.045 read: IOPS=522, BW=2089KiB/s (2139kB/s)(2116KiB/1013msec) 00:08:27.045 slat (nsec): min=7674, max=23944, avg=8915.06, stdev=2401.16 00:08:27.045 clat (usec): min=189, max=41052, avg=1544.07, stdev=7186.82 00:08:27.045 lat (usec): min=197, max=41066, avg=1552.98, stdev=7188.95 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:08:27.045 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:08:27.045 | 70.00th=[ 243], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:08:27.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:27.045 | 99.99th=[41157] 00:08:27.045 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:08:27.045 slat (nsec): min=10654, max=43600, avg=12458.06, stdev=2021.58 00:08:27.045 clat (usec): min=124, max=336, avg=169.75, stdev=17.47 00:08:27.045 lat (usec): min=134, max=349, avg=182.21, stdev=18.05 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:08:27.045 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:27.045 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:08:27.045 | 99.00th=[ 221], 99.50th=[ 249], 99.90th=[ 322], 99.95th=[ 338], 00:08:27.045 | 99.99th=[ 338] 00:08:27.045 bw ( KiB/s): min= 8192, max= 8192, per=67.53%, avg=8192.00, stdev= 0.00, samples=1 00:08:27.045 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:27.045 lat (usec) : 250=90.41%, 500=8.50% 00:08:27.045 lat (msec) : 50=1.09% 00:08:27.045 cpu : usr=0.99%, sys=2.87%, ctx=1556, majf=0, minf=1 00:08:27.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:27.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:27.045 job1: (groupid=0, jobs=1): err= 0: pid=2702858: Mon Nov 4 16:19:53 2024 00:08:27.045 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:08:27.045 slat (nsec): min=11347, max=25061, avg=20878.32, stdev=3727.51 00:08:27.045 clat (usec): min=40648, max=42016, avg=41091.94, stdev=359.62 00:08:27.045 lat (usec): min=40659, max=42038, avg=41112.82, stdev=360.72 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:27.045 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:27.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:08:27.045 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:27.045 | 99.99th=[42206] 00:08:27.045 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:08:27.045 slat (nsec): min=10430, max=40247, avg=12478.75, stdev=1839.49 00:08:27.045 clat (usec): min=139, max=330, avg=178.90, stdev=14.66 00:08:27.045 lat (usec): min=151, max=343, avg=191.37, stdev=14.78 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:08:27.045 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:08:27.045 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:08:27.045 | 99.00th=[ 229], 99.50th=[ 245], 99.90th=[ 330], 99.95th=[ 330], 00:08:27.045 | 99.99th=[ 330] 00:08:27.045 bw ( KiB/s): min= 4096, max= 4096, per=33.77%, avg=4096.00, stdev= 0.00, samples=1 00:08:27.045 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:27.045 lat (usec) : 250=95.51%, 500=0.37% 00:08:27.045 lat (msec) : 50=4.12% 00:08:27.045 
cpu : usr=0.40%, sys=0.60%, ctx=534, majf=0, minf=2 00:08:27.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:27.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:27.045 job2: (groupid=0, jobs=1): err= 0: pid=2702859: Mon Nov 4 16:19:53 2024 00:08:27.045 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:08:27.045 slat (nsec): min=10168, max=27017, avg=22979.83, stdev=4300.78 00:08:27.045 clat (usec): min=307, max=41136, avg=39200.99, stdev=8479.04 00:08:27.045 lat (usec): min=329, max=41147, avg=39223.97, stdev=8479.12 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:27.045 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:27.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:27.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:27.045 | 99.99th=[41157] 00:08:27.045 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:08:27.045 slat (nsec): min=10932, max=38675, avg=12558.99, stdev=2098.99 00:08:27.045 clat (usec): min=150, max=329, avg=183.42, stdev=16.19 00:08:27.045 lat (usec): min=163, max=368, avg=195.98, stdev=16.79 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:08:27.045 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:08:27.045 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 206], 00:08:27.045 | 99.00th=[ 227], 99.50th=[ 265], 99.90th=[ 330], 99.95th=[ 330], 00:08:27.045 | 99.99th=[ 330] 00:08:27.045 bw ( KiB/s): min= 4096, max= 4096, per=33.77%, avg=4096.00, stdev= 0.00, 
samples=1 00:08:27.045 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:27.045 lat (usec) : 250=94.95%, 500=0.93% 00:08:27.045 lat (msec) : 50=4.11% 00:08:27.045 cpu : usr=0.40%, sys=1.00%, ctx=536, majf=0, minf=1 00:08:27.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:27.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:27.045 job3: (groupid=0, jobs=1): err= 0: pid=2702862: Mon Nov 4 16:19:53 2024 00:08:27.045 read: IOPS=524, BW=2099KiB/s (2149kB/s)(2120KiB/1010msec) 00:08:27.045 slat (nsec): min=8531, max=39652, avg=10752.25, stdev=4445.69 00:08:27.045 clat (usec): min=185, max=41030, avg=1524.28, stdev=7180.31 00:08:27.045 lat (usec): min=195, max=41053, avg=1535.03, stdev=7182.53 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:08:27.045 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:08:27.045 | 70.00th=[ 227], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 247], 00:08:27.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:27.045 | 99.99th=[41157] 00:08:27.045 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:08:27.045 slat (nsec): min=11229, max=54391, avg=14779.88, stdev=4604.45 00:08:27.045 clat (usec): min=133, max=304, avg=171.60, stdev=14.90 00:08:27.045 lat (usec): min=155, max=343, avg=186.38, stdev=16.41 00:08:27.045 clat percentiles (usec): 00:08:27.045 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:08:27.045 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:08:27.045 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:08:27.045 | 99.00th=[ 
210], 99.50th=[ 227], 99.90th=[ 297], 99.95th=[ 306], 00:08:27.045 | 99.99th=[ 306] 00:08:27.045 bw ( KiB/s): min= 8192, max= 8192, per=67.53%, avg=8192.00, stdev= 0.00, samples=1 00:08:27.045 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:27.045 lat (usec) : 250=98.58%, 500=0.32% 00:08:27.045 lat (msec) : 50=1.09% 00:08:27.045 cpu : usr=1.78%, sys=2.48%, ctx=1555, majf=0, minf=1 00:08:27.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:27.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.045 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:27.045 00:08:27.045 Run status group 0 (all jobs): 00:08:27.045 READ: bw=4359KiB/s (4464kB/s), 87.6KiB/s-2099KiB/s (89.8kB/s-2149kB/s), io=4416KiB (4522kB), run=1004-1013msec 00:08:27.045 WRITE: bw=11.8MiB/s (12.4MB/s), 2038KiB/s-4055KiB/s (2087kB/s-4153kB/s), io=12.0MiB (12.6MB), run=1004-1013msec 00:08:27.045 00:08:27.045 Disk stats (read/write): 00:08:27.045 nvme0n1: ios=547/1024, merge=0/0, ticks=1508/160, in_queue=1668, util=86.06% 00:08:27.045 nvme0n2: ios=68/512, merge=0/0, ticks=803/90, in_queue=893, util=90.96% 00:08:27.045 nvme0n3: ios=42/512, merge=0/0, ticks=1646/87, in_queue=1733, util=93.66% 00:08:27.045 nvme0n4: ios=548/1024, merge=0/0, ticks=1539/154, in_queue=1693, util=94.24% 00:08:27.045 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:27.045 [global] 00:08:27.045 thread=1 00:08:27.045 invalidate=1 00:08:27.045 rw=write 00:08:27.045 time_based=1 00:08:27.045 runtime=1 00:08:27.045 ioengine=libaio 00:08:27.045 direct=1 00:08:27.046 bs=4096 00:08:27.046 iodepth=128 00:08:27.046 norandommap=0 00:08:27.046 
numjobs=1 00:08:27.046 00:08:27.046 verify_dump=1 00:08:27.046 verify_backlog=512 00:08:27.046 verify_state_save=0 00:08:27.046 do_verify=1 00:08:27.046 verify=crc32c-intel 00:08:27.046 [job0] 00:08:27.046 filename=/dev/nvme0n1 00:08:27.046 [job1] 00:08:27.046 filename=/dev/nvme0n2 00:08:27.046 [job2] 00:08:27.046 filename=/dev/nvme0n3 00:08:27.046 [job3] 00:08:27.046 filename=/dev/nvme0n4 00:08:27.046 Could not set queue depth (nvme0n1) 00:08:27.046 Could not set queue depth (nvme0n2) 00:08:27.046 Could not set queue depth (nvme0n3) 00:08:27.046 Could not set queue depth (nvme0n4) 00:08:27.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.303 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.303 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.303 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.303 fio-3.35 00:08:27.303 Starting 4 threads 00:08:28.682 00:08:28.682 job0: (groupid=0, jobs=1): err= 0: pid=2703231: Mon Nov 4 16:19:55 2024 00:08:28.682 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:08:28.682 slat (usec): min=3, max=51213, avg=322.64, stdev=2652.06 00:08:28.682 clat (msec): min=11, max=129, avg=41.14, stdev=29.64 00:08:28.682 lat (msec): min=11, max=129, avg=41.46, stdev=29.76 00:08:28.682 clat percentiles (msec): 00:08:28.682 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:08:28.682 | 30.00th=[ 20], 40.00th=[ 24], 50.00th=[ 37], 60.00th=[ 42], 00:08:28.682 | 70.00th=[ 45], 80.00th=[ 65], 90.00th=[ 93], 95.00th=[ 97], 00:08:28.682 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:08:28.682 | 99.99th=[ 130] 00:08:28.682 write: IOPS=1954, BW=7817KiB/s (8004kB/s)(7840KiB/1003msec); 0 zone resets 00:08:28.682 slat (usec): min=5, max=19586, 
avg=248.72, stdev=1382.63 00:08:28.682 clat (usec): min=2673, max=66306, avg=29626.32, stdev=15591.26 00:08:28.682 lat (usec): min=5324, max=74303, avg=29875.04, stdev=15698.48 00:08:28.682 clat percentiles (usec): 00:08:28.682 | 1.00th=[ 8717], 5.00th=[12387], 10.00th=[12780], 20.00th=[12911], 00:08:28.682 | 30.00th=[19530], 40.00th=[23987], 50.00th=[27395], 60.00th=[28967], 00:08:28.682 | 70.00th=[35914], 80.00th=[45876], 90.00th=[53740], 95.00th=[60031], 00:08:28.682 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:08:28.682 | 99.99th=[66323] 00:08:28.682 bw ( KiB/s): min= 6984, max= 7680, per=10.96%, avg=7332.00, stdev=492.15, samples=2 00:08:28.682 iops : min= 1746, max= 1920, avg=1833.00, stdev=123.04, samples=2 00:08:28.682 lat (msec) : 4=0.03%, 10=1.00%, 20=29.61%, 50=50.89%, 100=16.68% 00:08:28.682 lat (msec) : 250=1.80% 00:08:28.682 cpu : usr=2.00%, sys=2.99%, ctx=142, majf=0, minf=1 00:08:28.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:08:28.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:28.682 issued rwts: total=1536,1960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:28.683 job1: (groupid=0, jobs=1): err= 0: pid=2703232: Mon Nov 4 16:19:55 2024 00:08:28.683 read: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1004msec) 00:08:28.683 slat (nsec): min=1404, max=9243.1k, avg=88101.51, stdev=529154.30 00:08:28.683 clat (usec): min=2873, max=21255, avg=11738.89, stdev=2281.28 00:08:28.683 lat (usec): min=3205, max=21283, avg=11826.99, stdev=2318.14 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 1.00th=[ 5145], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10028], 00:08:28.683 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11863], 60.00th=[12125], 00:08:28.683 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14353], 
95.00th=[16188], 00:08:28.683 | 99.00th=[18220], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:08:28.683 | 99.99th=[21365] 00:08:28.683 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:08:28.683 slat (usec): min=2, max=26108, avg=101.84, stdev=708.48 00:08:28.683 clat (usec): min=2926, max=56333, avg=12894.22, stdev=6895.03 00:08:28.683 lat (usec): min=2935, max=56336, avg=12996.06, stdev=6953.80 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 1.00th=[ 5145], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[ 9896], 00:08:28.683 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:08:28.683 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15533], 95.00th=[24249], 00:08:28.683 | 99.00th=[47449], 99.50th=[49021], 99.90th=[56361], 99.95th=[56361], 00:08:28.683 | 99.99th=[56361] 00:08:28.683 bw ( KiB/s): min=20480, max=20480, per=30.61%, avg=20480.00, stdev= 0.00, samples=2 00:08:28.683 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:28.683 lat (msec) : 4=0.15%, 10=19.53%, 20=76.86%, 50=3.25%, 100=0.21% 00:08:28.683 cpu : usr=4.59%, sys=7.28%, ctx=383, majf=0, minf=1 00:08:28.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:28.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:28.683 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:28.683 job2: (groupid=0, jobs=1): err= 0: pid=2703236: Mon Nov 4 16:19:55 2024 00:08:28.683 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:08:28.683 slat (nsec): min=1061, max=17538k, avg=104520.68, stdev=775049.66 00:08:28.683 clat (usec): min=3867, max=61808, avg=12853.85, stdev=5418.36 00:08:28.683 lat (usec): min=3872, max=61816, avg=12958.37, stdev=5490.87 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 
1.00th=[ 4490], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[10028], 00:08:28.683 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:08:28.683 | 70.00th=[12387], 80.00th=[14353], 90.00th=[18744], 95.00th=[21365], 00:08:28.683 | 99.00th=[40109], 99.50th=[49546], 99.90th=[55837], 99.95th=[61604], 00:08:28.683 | 99.99th=[61604] 00:08:28.683 write: IOPS=5328, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1010msec); 0 zone resets 00:08:28.683 slat (usec): min=2, max=13712, avg=76.73, stdev=429.75 00:08:28.683 clat (usec): min=550, max=61789, avg=11574.22, stdev=7978.63 00:08:28.683 lat (usec): min=557, max=61795, avg=11650.96, stdev=8029.26 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 1.00th=[ 3359], 5.00th=[ 4948], 10.00th=[ 6325], 20.00th=[ 7504], 00:08:28.683 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[11469], 00:08:28.683 | 70.00th=[11600], 80.00th=[11863], 90.00th=[14222], 95.00th=[21365], 00:08:28.683 | 99.00th=[51643], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:08:28.683 | 99.99th=[61604] 00:08:28.683 bw ( KiB/s): min=18376, max=23664, per=31.41%, avg=21020.00, stdev=3739.18, samples=2 00:08:28.683 iops : min= 4594, max= 5916, avg=5255.00, stdev=934.80, samples=2 00:08:28.683 lat (usec) : 750=0.05% 00:08:28.683 lat (msec) : 2=0.10%, 4=0.96%, 10=30.92%, 20=61.65%, 50=4.98% 00:08:28.683 lat (msec) : 100=1.35% 00:08:28.683 cpu : usr=3.77%, sys=5.05%, ctx=576, majf=0, minf=1 00:08:28.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:28.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:28.683 issued rwts: total=5120,5382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:28.683 job3: (groupid=0, jobs=1): err= 0: pid=2703240: Mon Nov 4 16:19:55 2024 00:08:28.683 read: IOPS=4504, BW=17.6MiB/s 
(18.4MB/s)(18.5MiB/1051msec) 00:08:28.683 slat (nsec): min=1270, max=13941k, avg=105300.49, stdev=754663.01 00:08:28.683 clat (usec): min=4643, max=62842, avg=13732.33, stdev=7932.46 00:08:28.683 lat (usec): min=4661, max=62845, avg=13837.63, stdev=7960.86 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 1.00th=[ 5800], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10028], 00:08:28.683 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:08:28.683 | 70.00th=[12780], 80.00th=[16188], 90.00th=[18744], 95.00th=[21365], 00:08:28.683 | 99.00th=[57934], 99.50th=[60556], 99.90th=[62653], 99.95th=[62653], 00:08:28.683 | 99.99th=[62653] 00:08:28.683 write: IOPS=4871, BW=19.0MiB/s (20.0MB/s)(20.0MiB/1051msec); 0 zone resets 00:08:28.683 slat (usec): min=2, max=19100, avg=92.69, stdev=614.68 00:08:28.683 clat (usec): min=791, max=63720, avg=13264.32, stdev=9273.10 00:08:28.683 lat (usec): min=915, max=63730, avg=13357.00, stdev=9329.84 00:08:28.683 clat percentiles (usec): 00:08:28.683 | 1.00th=[ 4178], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[10159], 00:08:28.683 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:08:28.683 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12911], 95.00th=[28181], 00:08:28.683 | 99.00th=[62653], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:08:28.683 | 99.99th=[63701] 00:08:28.683 bw ( KiB/s): min=18416, max=22528, per=30.59%, avg=20472.00, stdev=2907.62, samples=2 00:08:28.683 iops : min= 4604, max= 5632, avg=5118.00, stdev=726.91, samples=2 00:08:28.683 lat (usec) : 1000=0.01% 00:08:28.683 lat (msec) : 4=0.35%, 10=19.22%, 20=72.60%, 50=5.27%, 100=2.56% 00:08:28.683 cpu : usr=3.81%, sys=4.48%, ctx=619, majf=0, minf=1 00:08:28.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:28.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:28.683 issued rwts: 
total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:28.683 00:08:28.683 Run status group 0 (all jobs): 00:08:28.683 READ: bw=60.6MiB/s (63.6MB/s), 6126KiB/s-19.8MiB/s (6273kB/s-20.8MB/s), io=63.7MiB (66.8MB), run=1003-1051msec 00:08:28.683 WRITE: bw=65.3MiB/s (68.5MB/s), 7817KiB/s-20.8MiB/s (8004kB/s-21.8MB/s), io=68.7MiB (72.0MB), run=1003-1051msec 00:08:28.683 00:08:28.683 Disk stats (read/write): 00:08:28.683 nvme0n1: ios=1461/1536, merge=0/0, ticks=17604/12641, in_queue=30245, util=86.97% 00:08:28.683 nvme0n2: ios=3982/4096, merge=0/0, ticks=28621/36113, in_queue=64734, util=90.65% 00:08:28.683 nvme0n3: ios=4153/4151, merge=0/0, ticks=50953/45012, in_queue=95965, util=89.49% 00:08:28.683 nvme0n4: ios=3641/4079, merge=0/0, ticks=42168/42232, in_queue=84400, util=94.48% 00:08:28.683 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:28.683 [global] 00:08:28.683 thread=1 00:08:28.683 invalidate=1 00:08:28.683 rw=randwrite 00:08:28.683 time_based=1 00:08:28.683 runtime=1 00:08:28.683 ioengine=libaio 00:08:28.683 direct=1 00:08:28.683 bs=4096 00:08:28.683 iodepth=128 00:08:28.683 norandommap=0 00:08:28.683 numjobs=1 00:08:28.683 00:08:28.683 verify_dump=1 00:08:28.683 verify_backlog=512 00:08:28.683 verify_state_save=0 00:08:28.683 do_verify=1 00:08:28.683 verify=crc32c-intel 00:08:28.683 [job0] 00:08:28.683 filename=/dev/nvme0n1 00:08:28.683 [job1] 00:08:28.683 filename=/dev/nvme0n2 00:08:28.683 [job2] 00:08:28.683 filename=/dev/nvme0n3 00:08:28.683 [job3] 00:08:28.683 filename=/dev/nvme0n4 00:08:28.683 Could not set queue depth (nvme0n1) 00:08:28.683 Could not set queue depth (nvme0n2) 00:08:28.683 Could not set queue depth (nvme0n3) 00:08:28.683 Could not set queue depth (nvme0n4) 00:08:28.940 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:28.940 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:28.940 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:28.940 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:28.940 fio-3.35 00:08:28.940 Starting 4 threads 00:08:30.309 00:08:30.309 job0: (groupid=0, jobs=1): err= 0: pid=2703674: Mon Nov 4 16:19:56 2024 00:08:30.310 read: IOPS=5446, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1004msec) 00:08:30.310 slat (nsec): min=1385, max=10697k, avg=83681.02, stdev=634953.75 00:08:30.310 clat (usec): min=493, max=105961, avg=11513.30, stdev=6931.08 00:08:30.310 lat (usec): min=502, max=105966, avg=11596.98, stdev=6956.57 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 979], 5.00th=[ 3130], 10.00th=[ 7373], 20.00th=[ 9503], 00:08:30.310 | 30.00th=[ 9896], 40.00th=[ 10552], 50.00th=[ 11207], 60.00th=[ 11469], 00:08:30.310 | 70.00th=[ 11731], 80.00th=[ 13435], 90.00th=[ 16581], 95.00th=[ 18744], 00:08:30.310 | 99.00th=[ 21365], 99.50th=[ 21890], 99.90th=[105382], 99.95th=[106431], 00:08:30.310 | 99.99th=[106431] 00:08:30.310 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:08:30.310 slat (usec): min=2, max=17027, avg=74.27, stdev=508.56 00:08:30.310 clat (usec): min=449, max=62191, avg=11367.47, stdev=7064.24 00:08:30.310 lat (usec): min=465, max=62199, avg=11441.75, stdev=7078.21 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 1598], 5.00th=[ 4686], 10.00th=[ 6325], 20.00th=[ 7504], 00:08:30.310 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11338], 60.00th=[11600], 00:08:30.310 | 70.00th=[11863], 80.00th=[11994], 90.00th=[13566], 95.00th=[18482], 00:08:30.310 | 99.00th=[52167], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:08:30.310 | 99.99th=[62129] 
00:08:30.310 bw ( KiB/s): min=20624, max=24432, per=28.99%, avg=22528.00, stdev=2692.66, samples=2 00:08:30.310 iops : min= 5156, max= 6108, avg=5632.00, stdev=673.17, samples=2 00:08:30.310 lat (usec) : 500=0.05%, 750=0.12%, 1000=0.59% 00:08:30.310 lat (msec) : 2=1.50%, 4=3.23%, 10=28.35%, 20=62.68%, 50=2.50% 00:08:30.310 lat (msec) : 100=0.85%, 250=0.13% 00:08:30.310 cpu : usr=3.19%, sys=7.98%, ctx=595, majf=0, minf=1 00:08:30.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:30.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.310 issued rwts: total=5468,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.310 job1: (groupid=0, jobs=1): err= 0: pid=2703691: Mon Nov 4 16:19:56 2024 00:08:30.310 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:08:30.310 slat (nsec): min=1092, max=10142k, avg=94206.80, stdev=557528.94 00:08:30.310 clat (usec): min=2115, max=64426, avg=12747.20, stdev=3756.03 00:08:30.310 lat (usec): min=2119, max=64456, avg=12841.40, stdev=3767.83 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 5276], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:08:30.310 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:08:30.310 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14353], 95.00th=[16909], 00:08:30.310 | 99.00th=[26346], 99.50th=[28181], 99.90th=[61080], 99.95th=[61080], 00:08:30.310 | 99.99th=[64226] 00:08:30.310 write: IOPS=4941, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec); 0 zone resets 00:08:30.310 slat (usec): min=2, max=15590, avg=90.74, stdev=528.97 00:08:30.310 clat (usec): min=1079, max=66294, avg=13842.05, stdev=7700.44 00:08:30.310 lat (usec): min=1090, max=66301, avg=13932.78, stdev=7738.74 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 5145], 5.00th=[ 
5997], 10.00th=[ 7963], 20.00th=[10028], 00:08:30.310 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:08:30.310 | 70.00th=[12649], 80.00th=[14877], 90.00th=[22938], 95.00th=[28443], 00:08:30.310 | 99.00th=[52167], 99.50th=[57934], 99.90th=[66323], 99.95th=[66323], 00:08:30.310 | 99.99th=[66323] 00:08:30.310 bw ( KiB/s): min=16640, max=21952, per=24.83%, avg=19296.00, stdev=3756.15, samples=2 00:08:30.310 iops : min= 4160, max= 5488, avg=4824.00, stdev=939.04, samples=2 00:08:30.310 lat (msec) : 2=0.15%, 4=0.26%, 10=12.06%, 20=78.79%, 50=7.94% 00:08:30.310 lat (msec) : 100=0.80% 00:08:30.310 cpu : usr=3.50%, sys=4.90%, ctx=449, majf=0, minf=2 00:08:30.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:30.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.310 issued rwts: total=4608,4951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.310 job2: (groupid=0, jobs=1): err= 0: pid=2703710: Mon Nov 4 16:19:56 2024 00:08:30.310 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:08:30.310 slat (nsec): min=1128, max=11589k, avg=130012.28, stdev=681414.65 00:08:30.310 clat (usec): min=9138, max=52996, avg=16775.11, stdev=6231.11 00:08:30.310 lat (usec): min=9145, max=57827, avg=16905.12, stdev=6228.53 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[10945], 5.00th=[11994], 10.00th=[13173], 20.00th=[13566], 00:08:30.310 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:08:30.310 | 70.00th=[15401], 80.00th=[19530], 90.00th=[24511], 95.00th=[29230], 00:08:30.310 | 99.00th=[49021], 99.50th=[49021], 99.90th=[53216], 99.95th=[53216], 00:08:30.310 | 99.99th=[53216] 00:08:30.310 write: IOPS=3791, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1002msec); 0 zone resets 00:08:30.310 slat (usec): min=2, max=21029, 
avg=133.15, stdev=862.23 00:08:30.310 clat (usec): min=479, max=53937, avg=17510.19, stdev=8418.09 00:08:30.310 lat (usec): min=3755, max=53947, avg=17643.35, stdev=8460.51 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 7373], 5.00th=[11469], 10.00th=[12387], 20.00th=[13566], 00:08:30.310 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14746], 00:08:30.310 | 70.00th=[15926], 80.00th=[18482], 90.00th=[30278], 95.00th=[39060], 00:08:30.310 | 99.00th=[52167], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:08:30.310 | 99.99th=[53740] 00:08:30.310 bw ( KiB/s): min=12288, max=17080, per=18.90%, avg=14684.00, stdev=3388.46, samples=2 00:08:30.310 iops : min= 3072, max= 4270, avg=3671.00, stdev=847.11, samples=2 00:08:30.310 lat (usec) : 500=0.01% 00:08:30.310 lat (msec) : 4=0.24%, 10=0.77%, 20=79.95%, 50=18.27%, 100=0.74% 00:08:30.310 cpu : usr=2.60%, sys=4.70%, ctx=365, majf=0, minf=1 00:08:30.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:30.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.310 issued rwts: total=3584,3799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.310 job3: (groupid=0, jobs=1): err= 0: pid=2703717: Mon Nov 4 16:19:56 2024 00:08:30.310 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:08:30.310 slat (nsec): min=1370, max=8396.1k, avg=98255.94, stdev=572847.40 00:08:30.310 clat (usec): min=1141, max=22652, avg=12586.32, stdev=2443.81 00:08:30.310 lat (usec): min=3816, max=22654, avg=12684.58, stdev=2475.44 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10421], 00:08:30.310 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13304], 00:08:30.310 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15270], 
95.00th=[16581], 00:08:30.310 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20841], 99.95th=[20841], 00:08:30.310 | 99.99th=[22676] 00:08:30.310 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:08:30.310 slat (usec): min=2, max=8917, avg=92.17, stdev=518.37 00:08:30.310 clat (usec): min=1519, max=21610, avg=12511.81, stdev=2122.67 00:08:30.310 lat (usec): min=1531, max=21614, avg=12603.98, stdev=2178.67 00:08:30.310 clat percentiles (usec): 00:08:30.310 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[11207], 00:08:30.310 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12780], 60.00th=[13042], 00:08:30.310 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[16319], 00:08:30.310 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:08:30.310 | 99.99th=[21627] 00:08:30.310 bw ( KiB/s): min=19728, max=21232, per=26.36%, avg=20480.00, stdev=1063.49, samples=2 00:08:30.310 iops : min= 4932, max= 5308, avg=5120.00, stdev=265.87, samples=2 00:08:30.310 lat (msec) : 2=0.03%, 4=0.15%, 10=11.56%, 20=87.51%, 50=0.75% 00:08:30.310 cpu : usr=4.79%, sys=6.58%, ctx=521, majf=0, minf=1 00:08:30.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:30.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:30.310 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:30.310 00:08:30.310 Run status group 0 (all jobs): 00:08:30.310 READ: bw=72.5MiB/s (76.1MB/s), 14.0MiB/s-21.3MiB/s (14.7MB/s-22.3MB/s), io=72.8MiB (76.4MB), run=1002-1004msec 00:08:30.310 WRITE: bw=75.9MiB/s (79.6MB/s), 14.8MiB/s-21.9MiB/s (15.5MB/s-23.0MB/s), io=76.2MiB (79.9MB), run=1002-1004msec 00:08:30.310 00:08:30.311 Disk stats (read/write): 00:08:30.311 nvme0n1: ios=4638/4807, merge=0/0, ticks=50387/50188, in_queue=100575, 
util=100.00% 00:08:30.311 nvme0n2: ios=3902/4096, merge=0/0, ticks=28571/40003, in_queue=68574, util=87.91% 00:08:30.311 nvme0n3: ios=2968/3072, merge=0/0, ticks=14934/17379, in_queue=32313, util=93.22% 00:08:30.311 nvme0n4: ios=4116/4514, merge=0/0, ticks=28704/30058, in_queue=58762, util=97.47% 00:08:30.311 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:30.311 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2703843 00:08:30.311 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:30.311 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:30.311 [global] 00:08:30.311 thread=1 00:08:30.311 invalidate=1 00:08:30.311 rw=read 00:08:30.311 time_based=1 00:08:30.311 runtime=10 00:08:30.311 ioengine=libaio 00:08:30.311 direct=1 00:08:30.311 bs=4096 00:08:30.311 iodepth=1 00:08:30.311 norandommap=1 00:08:30.311 numjobs=1 00:08:30.311 00:08:30.311 [job0] 00:08:30.311 filename=/dev/nvme0n1 00:08:30.311 [job1] 00:08:30.311 filename=/dev/nvme0n2 00:08:30.311 [job2] 00:08:30.311 filename=/dev/nvme0n3 00:08:30.311 [job3] 00:08:30.311 filename=/dev/nvme0n4 00:08:30.311 Could not set queue depth (nvme0n1) 00:08:30.311 Could not set queue depth (nvme0n2) 00:08:30.311 Could not set queue depth (nvme0n3) 00:08:30.311 Could not set queue depth (nvme0n4) 00:08:30.311 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.311 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.311 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.311 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.311 fio-3.35 
00:08:30.311 Starting 4 threads 00:08:33.589 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:33.589 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:33.589 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=20701184, buflen=4096 00:08:33.589 fio: pid=2704197, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.589 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47951872, buflen=4096 00:08:33.589 fio: pid=2704196, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.589 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.589 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:33.847 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=30109696, buflen=4096 00:08:33.847 fio: pid=2704147, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.847 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.847 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:33.847 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51589120, buflen=4096 00:08:33.847 fio: pid=2704171, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.847 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.847 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:34.105 00:08:34.105 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2704147: Mon Nov 4 16:20:00 2024 00:08:34.105 read: IOPS=2328, BW=9311KiB/s (9534kB/s)(28.7MiB/3158msec) 00:08:34.105 slat (usec): min=6, max=16399, avg=13.50, stdev=283.82 00:08:34.105 clat (usec): min=181, max=42071, avg=412.02, stdev=2297.17 00:08:34.105 lat (usec): min=188, max=42080, avg=425.52, stdev=2315.29 00:08:34.105 clat percentiles (usec): 00:08:34.105 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 239], 00:08:34.105 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:08:34.105 | 70.00th=[ 262], 80.00th=[ 302], 90.00th=[ 424], 95.00th=[ 474], 00:08:34.105 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[41157], 99.95th=[41681], 00:08:34.105 | 99.99th=[42206] 00:08:34.105 bw ( KiB/s): min= 104, max=15520, per=20.92%, avg=9111.33, stdev=6769.16, samples=6 00:08:34.105 iops : min= 26, max= 3880, avg=2277.83, stdev=1692.29, samples=6 00:08:34.105 lat (usec) : 250=46.07%, 500=51.36%, 750=2.22% 00:08:34.105 lat (msec) : 2=0.01%, 50=0.33% 00:08:34.105 cpu : usr=0.60%, sys=2.06%, ctx=7356, majf=0, minf=1 00:08:34.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 issued rwts: total=7352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.105 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2704171: Mon Nov 4 
16:20:00 2024 00:08:34.105 read: IOPS=3736, BW=14.6MiB/s (15.3MB/s)(49.2MiB/3371msec) 00:08:34.105 slat (usec): min=6, max=32167, avg=12.90, stdev=340.81 00:08:34.105 clat (usec): min=170, max=9745, avg=250.82, stdev=89.87 00:08:34.105 lat (usec): min=177, max=32888, avg=263.72, stdev=357.37 00:08:34.105 clat percentiles (usec): 00:08:34.105 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 219], 00:08:34.105 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:08:34.105 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:08:34.105 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 424], 99.95th=[ 469], 00:08:34.105 | 99.99th=[ 725] 00:08:34.105 bw ( KiB/s): min=14080, max=15616, per=34.28%, avg=14933.00, stdev=584.77, samples=6 00:08:34.105 iops : min= 3520, max= 3904, avg=3733.17, stdev=146.15, samples=6 00:08:34.105 lat (usec) : 250=43.36%, 500=56.61%, 750=0.02% 00:08:34.105 lat (msec) : 10=0.01% 00:08:34.105 cpu : usr=0.95%, sys=3.47%, ctx=12604, majf=0, minf=2 00:08:34.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 issued rwts: total=12596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.105 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2704196: Mon Nov 4 16:20:00 2024 00:08:34.105 read: IOPS=3989, BW=15.6MiB/s (16.3MB/s)(45.7MiB/2935msec) 00:08:34.105 slat (usec): min=4, max=14861, avg=10.85, stdev=168.96 00:08:34.105 clat (usec): min=171, max=784, avg=237.23, stdev=36.51 00:08:34.105 lat (usec): min=179, max=15181, avg=248.07, stdev=173.79 00:08:34.105 clat percentiles (usec): 00:08:34.105 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:08:34.105 | 30.00th=[ 221], 
40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:08:34.105 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:08:34.105 | 99.00th=[ 453], 99.50th=[ 482], 99.90th=[ 545], 99.95th=[ 570], 00:08:34.105 | 99.99th=[ 709] 00:08:34.105 bw ( KiB/s): min=14800, max=17648, per=37.29%, avg=16244.80, stdev=1281.59, samples=5 00:08:34.105 iops : min= 3700, max= 4412, avg=4061.20, stdev=320.40, samples=5 00:08:34.105 lat (usec) : 250=72.92%, 500=26.87%, 750=0.19%, 1000=0.01% 00:08:34.105 cpu : usr=2.11%, sys=4.84%, ctx=11710, majf=0, minf=2 00:08:34.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.105 issued rwts: total=11708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.105 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2704197: Mon Nov 4 16:20:00 2024 00:08:34.105 read: IOPS=1839, BW=7357KiB/s (7533kB/s)(19.7MiB/2748msec) 00:08:34.105 slat (nsec): min=6412, max=32519, avg=7574.20, stdev=1677.89 00:08:34.105 clat (usec): min=211, max=42641, avg=530.42, stdev=3241.80 00:08:34.105 lat (usec): min=218, max=42650, avg=537.99, stdev=3242.88 00:08:34.105 clat percentiles (usec): 00:08:34.105 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:08:34.105 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:08:34.106 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:08:34.106 | 99.00th=[ 326], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:08:34.106 | 99.99th=[42730] 00:08:34.106 bw ( KiB/s): min= 96, max=14368, per=18.37%, avg=8000.00, stdev=7307.83, samples=5 00:08:34.106 iops : min= 24, max= 3592, avg=2000.00, stdev=1826.96, samples=5 00:08:34.106 lat (usec) : 250=12.86%, 
500=86.49% 00:08:34.106 lat (msec) : 50=0.63% 00:08:34.106 cpu : usr=0.33%, sys=1.82%, ctx=5055, majf=0, minf=2 00:08:34.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.106 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.106 issued rwts: total=5055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.106 00:08:34.106 Run status group 0 (all jobs): 00:08:34.106 READ: bw=42.5MiB/s (44.6MB/s), 7357KiB/s-15.6MiB/s (7533kB/s-16.3MB/s), io=143MiB (150MB), run=2748-3371msec 00:08:34.106 00:08:34.106 Disk stats (read/write): 00:08:34.106 nvme0n1: ios=7031/0, merge=0/0, ticks=2918/0, in_queue=2918, util=92.91% 00:08:34.106 nvme0n2: ios=12487/0, merge=0/0, ticks=3845/0, in_queue=3845, util=97.32% 00:08:34.106 nvme0n3: ios=11240/0, merge=0/0, ticks=2584/0, in_queue=2584, util=95.38% 00:08:34.106 nvme0n4: ios=5049/0, merge=0/0, ticks=2488/0, in_queue=2488, util=96.39% 00:08:34.106 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:34.106 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:34.364 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:34.364 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:34.621 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:34.621 16:20:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:34.879 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:34.879 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:35.136 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:35.136 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2703843 00:08:35.136 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:35.137 nvmf hotplug test: fio failed as expected 00:08:35.137 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.394 rmmod nvme_tcp 00:08:35.394 rmmod nvme_fabrics 00:08:35.394 rmmod nvme_keyring 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2701128 ']' 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2701128 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2701128 ']' 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2701128 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701128 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701128' 00:08:35.394 killing process with pid 2701128 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2701128 00:08:35.394 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2701128 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:35.652 16:20:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.652 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.183 00:08:38.183 real 0m25.932s 00:08:38.183 user 1m46.645s 00:08:38.183 sys 0m8.125s 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:38.183 ************************************ 00:08:38.183 END TEST nvmf_fio_target 00:08:38.183 ************************************ 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:08:38.183 ************************************ 00:08:38.183 START TEST nvmf_bdevio 00:08:38.183 ************************************ 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:38.183 * Looking for test storage... 00:08:38.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.183 16:20:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.183 --rc genhtml_branch_coverage=1 00:08:38.183 --rc genhtml_function_coverage=1 00:08:38.183 --rc genhtml_legend=1 00:08:38.183 --rc geninfo_all_blocks=1 00:08:38.183 --rc geninfo_unexecuted_blocks=1 00:08:38.183 00:08:38.183 ' 00:08:38.183 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.184 --rc genhtml_branch_coverage=1 00:08:38.184 --rc genhtml_function_coverage=1 00:08:38.184 --rc genhtml_legend=1 00:08:38.184 --rc geninfo_all_blocks=1 00:08:38.184 --rc geninfo_unexecuted_blocks=1 00:08:38.184 00:08:38.184 ' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.184 --rc genhtml_branch_coverage=1 00:08:38.184 --rc genhtml_function_coverage=1 00:08:38.184 --rc genhtml_legend=1 00:08:38.184 --rc geninfo_all_blocks=1 00:08:38.184 --rc geninfo_unexecuted_blocks=1 00:08:38.184 00:08:38.184 ' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.184 --rc genhtml_branch_coverage=1 00:08:38.184 --rc genhtml_function_coverage=1 00:08:38.184 --rc genhtml_legend=1 00:08:38.184 --rc geninfo_all_blocks=1 00:08:38.184 --rc geninfo_unexecuted_blocks=1 00:08:38.184 00:08:38.184 ' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.184 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.444 16:20:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.444 16:20:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:43.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:43.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.444 
16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:43.444 Found net devices under 0000:86:00.0: cvl_0_0 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:43.444 Found net devices under 0000:86:00.1: cvl_0_1 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.444 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.445 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:08:43.445 00:08:43.445 --- 10.0.0.2 ping statistics --- 00:08:43.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.445 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:43.445 00:08:43.445 --- 10.0.0.1 ping statistics --- 00:08:43.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.445 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.445 16:20:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2708448 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2708448 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2708448 ']' 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.445 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.703 [2024-11-04 16:20:10.290323] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
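The `nvmf_tcp_init` steps traced above (nvmf/common.sh@250-291) carve one port of the dual-port NIC into a private network namespace, so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, default namespace) exchange NVMe/TCP traffic over real hardware on a single host. A minimal standalone sketch of the same plumbing, assuming root privileges — `setup_loopback_ns` is a name invented here, and the `cvl_0_0`/`cvl_0_1` interface names are taken from this log, so substitute your own:

```shell
#!/usr/bin/env bash
# Sketch of the loopback topology the harness builds: the target-side port
# moves into a private namespace, the initiator-side port stays in the
# default namespace, and an iptables rule opens the NVMe/TCP listen port.
setup_loopback_ns() {
    local ns=$1 target_if=$2 initiator_if=$3
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target side lives in the ns
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator keeps default ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port (4420) toward traffic arriving on the initiator side
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

# Only attempt this as root and when the named interfaces actually exist;
# otherwise just report that the setup was skipped.
if [ "$(id -u)" -eq 0 ] && ip link show cvl_0_0 >/dev/null 2>&1; then
    setup_loopback_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
else
    echo "skipping namespace setup (needs root and the cvl_0_* interfaces)"
fi
```

With the namespace in place, the target application is launched under `ip netns exec cvl_0_0_ns_spdk …` (as the trace shows for `nvmf_tgt`), and the two `ping` checks in the log confirm reachability in both directions before the test proper starts.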
00:08:43.703 [2024-11-04 16:20:10.290365] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.703 [2024-11-04 16:20:10.356782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.703 [2024-11-04 16:20:10.396828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.703 [2024-11-04 16:20:10.396867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.703 [2024-11-04 16:20:10.396874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.703 [2024-11-04 16:20:10.396880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.703 [2024-11-04 16:20:10.396886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.703 [2024-11-04 16:20:10.398412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:43.703 [2024-11-04 16:20:10.398500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:43.703 [2024-11-04 16:20:10.398629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.703 [2024-11-04 16:20:10.398629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:43.703 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.703 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:43.703 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.703 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.703 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 [2024-11-04 16:20:10.546110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.961 16:20:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 Malloc0 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.961 [2024-11-04 16:20:10.602738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:43.961 { 00:08:43.961 "params": { 00:08:43.961 "name": "Nvme$subsystem", 00:08:43.961 "trtype": "$TEST_TRANSPORT", 00:08:43.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.961 "adrfam": "ipv4", 00:08:43.961 "trsvcid": "$NVMF_PORT", 00:08:43.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.961 "hdgst": ${hdgst:-false}, 00:08:43.961 "ddgst": ${ddgst:-false} 00:08:43.961 }, 00:08:43.961 "method": "bdev_nvme_attach_controller" 00:08:43.961 } 00:08:43.961 EOF 00:08:43.961 )") 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
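`gen_nvmf_target_json` (nvmf/common.sh@560-586) expands the heredoc template shown above into the `bdev_nvme_attach_controller` JSON that `bdevio` reads from `/dev/fd/62`. A simplified standalone sketch of that substitution — the real helper also collects one stanza per subsystem and pipes the result through `jq`, which is omitted here:

```shell
#!/usr/bin/env bash
# Expand one bdev_nvme_attach_controller stanza the way the template in the
# trace does; the default values mirror what this log's run printed.
gen_target_json() {
    local subsystem=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1 10.0.0.2 4420
```

Feeding the generated config via a file descriptor (`--json /dev/fd/62`) lets the harness hand `bdevio` a per-run configuration without writing a temporary file.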
00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:43.961 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:43.961 "params": { 00:08:43.961 "name": "Nvme1", 00:08:43.961 "trtype": "tcp", 00:08:43.961 "traddr": "10.0.0.2", 00:08:43.961 "adrfam": "ipv4", 00:08:43.961 "trsvcid": "4420", 00:08:43.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.961 "hdgst": false, 00:08:43.961 "ddgst": false 00:08:43.961 }, 00:08:43.961 "method": "bdev_nvme_attach_controller" 00:08:43.961 }' 00:08:43.961 [2024-11-04 16:20:10.651491] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:08:43.961 [2024-11-04 16:20:10.651534] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708471 ] 00:08:43.961 [2024-11-04 16:20:10.716131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.961 [2024-11-04 16:20:10.759898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.961 [2024-11-04 16:20:10.760002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.961 [2024-11-04 16:20:10.760003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.218 I/O targets: 00:08:44.218 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:44.218 00:08:44.218 00:08:44.218 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.218 http://cunit.sourceforge.net/ 00:08:44.218 00:08:44.218 00:08:44.218 Suite: bdevio tests on: Nvme1n1 00:08:44.218 Test: blockdev write read block ...passed 00:08:44.475 Test: blockdev write zeroes read block ...passed 00:08:44.475 Test: blockdev write zeroes read no split ...passed 00:08:44.475 Test: blockdev write zeroes read split 
...passed 00:08:44.475 Test: blockdev write zeroes read split partial ...passed 00:08:44.475 Test: blockdev reset ...[2024-11-04 16:20:11.108243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:44.475 [2024-11-04 16:20:11.108314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca1340 (9): Bad file descriptor 00:08:44.475 [2024-11-04 16:20:11.204685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:08:44.475 passed 00:08:44.475 Test: blockdev write read 8 blocks ...passed 00:08:44.475 Test: blockdev write read size > 128k ...passed 00:08:44.475 Test: blockdev write read invalid size ...passed 00:08:44.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.475 Test: blockdev write read max offset ...passed 00:08:44.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.733 Test: blockdev writev readv 8 blocks ...passed 00:08:44.733 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.733 Test: blockdev writev readv block ...passed 00:08:44.733 Test: blockdev writev readv size > 128k ...passed 00:08:44.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.733 Test: blockdev comparev and writev ...[2024-11-04 16:20:11.375589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.375625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.375640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 
16:20:11.375647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.375894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.375905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.375917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.375923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.376158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.376170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.376176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.376424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.376434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.376446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.733 [2024-11-04 16:20:11.376453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:44.733 passed 00:08:44.733 Test: blockdev nvme passthru rw ...passed 00:08:44.733 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:20:11.457969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.733 [2024-11-04 16:20:11.457985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.458092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.733 [2024-11-04 16:20:11.458102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.458220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.733 [2024-11-04 16:20:11.458230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:44.733 [2024-11-04 16:20:11.458344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.733 [2024-11-04 16:20:11.458358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:44.733 passed 00:08:44.733 Test: blockdev nvme admin passthru ...passed 00:08:44.733 Test: blockdev copy ...passed 00:08:44.733 00:08:44.733 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.733 suites 1 1 n/a 0 0 00:08:44.733 tests 23 23 23 0 0 00:08:44.733 asserts 152 152 152 0 n/a 00:08:44.733 00:08:44.733 Elapsed time = 1.112 seconds 
00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.991 rmmod nvme_tcp 00:08:44.991 rmmod nvme_fabrics 00:08:44.991 rmmod nvme_keyring 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2708448 ']' 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2708448 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2708448 ']' 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2708448 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708448 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708448' 00:08:44.991 killing process with pid 2708448 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2708448 00:08:44.991 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2708448 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.250 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.780 00:08:47.780 real 0m9.554s 00:08:47.780 user 0m9.860s 00:08:47.780 sys 0m4.665s 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:47.780 ************************************ 00:08:47.780 END TEST nvmf_bdevio 00:08:47.780 ************************************ 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:47.780 00:08:47.780 real 4m28.421s 00:08:47.780 user 10m14.248s 00:08:47.780 sys 1m33.170s 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.780 ************************************ 00:08:47.780 END TEST nvmf_target_core 00:08:47.780 ************************************ 00:08:47.780 16:20:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.780 16:20:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.780 16:20:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.780 16:20:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:08:47.780 ************************************ 00:08:47.780 START TEST nvmf_target_extra 00:08:47.780 ************************************ 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.780 * Looking for test storage... 00:08:47.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.780 --rc genhtml_branch_coverage=1 00:08:47.780 --rc genhtml_function_coverage=1 00:08:47.780 --rc genhtml_legend=1 00:08:47.780 --rc geninfo_all_blocks=1 
00:08:47.780 --rc geninfo_unexecuted_blocks=1 00:08:47.780 00:08:47.780 ' 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.780 --rc genhtml_branch_coverage=1 00:08:47.780 --rc genhtml_function_coverage=1 00:08:47.780 --rc genhtml_legend=1 00:08:47.780 --rc geninfo_all_blocks=1 00:08:47.780 --rc geninfo_unexecuted_blocks=1 00:08:47.780 00:08:47.780 ' 00:08:47.780 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.780 --rc genhtml_branch_coverage=1 00:08:47.780 --rc genhtml_function_coverage=1 00:08:47.781 --rc genhtml_legend=1 00:08:47.781 --rc geninfo_all_blocks=1 00:08:47.781 --rc geninfo_unexecuted_blocks=1 00:08:47.781 00:08:47.781 ' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.781 --rc genhtml_branch_coverage=1 00:08:47.781 --rc genhtml_function_coverage=1 00:08:47.781 --rc genhtml_legend=1 00:08:47.781 --rc geninfo_all_blocks=1 00:08:47.781 --rc geninfo_unexecuted_blocks=1 00:08:47.781 00:08:47.781 ' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:47.781 ************************************ 00:08:47.781 START TEST nvmf_example 00:08:47.781 ************************************ 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:47.781 * Looking for test storage... 00:08:47.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.781 
16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.781 --rc genhtml_branch_coverage=1 00:08:47.781 --rc genhtml_function_coverage=1 00:08:47.781 --rc genhtml_legend=1 00:08:47.781 --rc geninfo_all_blocks=1 00:08:47.781 --rc geninfo_unexecuted_blocks=1 00:08:47.781 00:08:47.781 ' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.781 --rc genhtml_branch_coverage=1 00:08:47.781 --rc genhtml_function_coverage=1 00:08:47.781 --rc genhtml_legend=1 00:08:47.781 --rc geninfo_all_blocks=1 00:08:47.781 --rc geninfo_unexecuted_blocks=1 00:08:47.781 00:08:47.781 ' 00:08:47.781 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.781 --rc genhtml_branch_coverage=1 00:08:47.781 --rc genhtml_function_coverage=1 00:08:47.782 --rc genhtml_legend=1 00:08:47.782 --rc geninfo_all_blocks=1 00:08:47.782 --rc geninfo_unexecuted_blocks=1 00:08:47.782 00:08:47.782 ' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.782 --rc 
genhtml_branch_coverage=1 00:08:47.782 --rc genhtml_function_coverage=1 00:08:47.782 --rc genhtml_legend=1 00:08:47.782 --rc geninfo_all_blocks=1 00:08:47.782 --rc geninfo_unexecuted_blocks=1 00:08:47.782 00:08:47.782 ' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:47.782 16:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.782 
16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.782 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.333 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.334 16:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:54.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:54.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:54.334 Found net devices under 0000:86:00.0: cvl_0_0 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.334 16:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:54.334 Found net devices under 0000:86:00.1: cvl_0_1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.334 
16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:08:54.334 00:08:54.334 --- 10.0.0.2 ping statistics --- 00:08:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.334 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:08:54.334 00:08:54.334 --- 10.0.0.1 ping statistics --- 00:08:54.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.334 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.334 16:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2712310 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2712310 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2712310 ']' 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:54.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.334 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.335 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:54.593 
16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.593 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:54.851 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:07.053 Initializing NVMe Controllers 00:09:07.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.053 Initialization complete. Launching workers. 00:09:07.053 ======================================================== 00:09:07.053 Latency(us) 00:09:07.053 Device Information : IOPS MiB/s Average min max 00:09:07.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18332.83 71.61 3491.04 535.51 40959.17 00:09:07.053 ======================================================== 00:09:07.053 Total : 18332.83 71.61 3491.04 535.51 40959.17 00:09:07.053 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.053 rmmod nvme_tcp 00:09:07.053 rmmod nvme_fabrics 00:09:07.053 rmmod nvme_keyring 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2712310 ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2712310 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2712310 ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2712310 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712310 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712310' 00:09:07.053 killing process with pid 2712310 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2712310 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2712310 00:09:07.053 nvmf threads initialize successfully 00:09:07.053 bdev subsystem init successfully 00:09:07.053 created a nvmf target service 00:09:07.053 create targets's poll groups done 00:09:07.053 all subsystems of target started 00:09:07.053 nvmf target is running 00:09:07.053 all subsystems of target stopped 00:09:07.053 destroy targets's poll groups done 00:09:07.053 destroyed the nvmf target service 00:09:07.053 bdev subsystem 
finish successfully 00:09:07.053 nvmf threads destroy successfully 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.053 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.053 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.053 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.053 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.053 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.053 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.311 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.311 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:07.311 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.311 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:07.311 00:09:07.312 real 0m19.727s 00:09:07.312 user 0m46.163s 00:09:07.312 sys 0m6.026s 00:09:07.312 
16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.312 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:07.312 ************************************ 00:09:07.312 END TEST nvmf_example 00:09:07.312 ************************************ 00:09:07.312 16:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:07.312 16:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.312 16:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.312 16:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:07.573 ************************************ 00:09:07.573 START TEST nvmf_filesystem 00:09:07.573 ************************************ 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:07.573 * Looking for test storage... 
00:09:07.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:07.573 
16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.573 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:07.573 --rc genhtml_branch_coverage=1 00:09:07.573 --rc genhtml_function_coverage=1 00:09:07.573 --rc genhtml_legend=1 00:09:07.573 --rc geninfo_all_blocks=1 00:09:07.573 --rc geninfo_unexecuted_blocks=1 00:09:07.573 00:09:07.573 ' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.573 --rc genhtml_branch_coverage=1 00:09:07.573 --rc genhtml_function_coverage=1 00:09:07.573 --rc genhtml_legend=1 00:09:07.573 --rc geninfo_all_blocks=1 00:09:07.573 --rc geninfo_unexecuted_blocks=1 00:09:07.573 00:09:07.573 ' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.573 --rc genhtml_branch_coverage=1 00:09:07.573 --rc genhtml_function_coverage=1 00:09:07.573 --rc genhtml_legend=1 00:09:07.573 --rc geninfo_all_blocks=1 00:09:07.573 --rc geninfo_unexecuted_blocks=1 00:09:07.573 00:09:07.573 ' 00:09:07.573 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.573 --rc genhtml_branch_coverage=1 00:09:07.573 --rc genhtml_function_coverage=1 00:09:07.573 --rc genhtml_legend=1 00:09:07.574 --rc geninfo_all_blocks=1 00:09:07.574 --rc geninfo_unexecuted_blocks=1 00:09:07.574 00:09:07.574 ' 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:07.574 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:07.574 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:07.574 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:07.574 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:07.574 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:07.574 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:07.575 
16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:07.575 #define SPDK_CONFIG_H 00:09:07.575 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:07.575 #define SPDK_CONFIG_APPS 1 00:09:07.575 #define SPDK_CONFIG_ARCH native 00:09:07.575 #undef SPDK_CONFIG_ASAN 00:09:07.575 #undef SPDK_CONFIG_AVAHI 00:09:07.575 #undef SPDK_CONFIG_CET 00:09:07.575 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:07.575 #define SPDK_CONFIG_COVERAGE 1 00:09:07.575 #define SPDK_CONFIG_CROSS_PREFIX 00:09:07.575 #undef SPDK_CONFIG_CRYPTO 00:09:07.575 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:07.575 #undef SPDK_CONFIG_CUSTOMOCF 00:09:07.575 #undef SPDK_CONFIG_DAOS 00:09:07.575 #define SPDK_CONFIG_DAOS_DIR 00:09:07.575 #define SPDK_CONFIG_DEBUG 1 00:09:07.575 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:07.575 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:07.575 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:07.575 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:07.575 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:07.575 #undef SPDK_CONFIG_DPDK_UADK 00:09:07.575 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:07.575 #define SPDK_CONFIG_EXAMPLES 1 00:09:07.575 #undef SPDK_CONFIG_FC 00:09:07.575 #define SPDK_CONFIG_FC_PATH 00:09:07.575 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:07.575 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:07.575 #define SPDK_CONFIG_FSDEV 1 00:09:07.575 #undef SPDK_CONFIG_FUSE 00:09:07.575 #undef SPDK_CONFIG_FUZZER 00:09:07.575 #define SPDK_CONFIG_FUZZER_LIB 00:09:07.575 #undef SPDK_CONFIG_GOLANG 00:09:07.575 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:07.575 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:07.575 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:07.575 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:07.575 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:07.575 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:07.575 #undef SPDK_CONFIG_HAVE_LZ4 00:09:07.575 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:07.575 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:07.575 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:07.575 #define SPDK_CONFIG_IDXD 1 00:09:07.575 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:07.575 #undef SPDK_CONFIG_IPSEC_MB 00:09:07.575 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:07.575 #define SPDK_CONFIG_ISAL 1 00:09:07.575 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:07.575 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:07.575 #define SPDK_CONFIG_LIBDIR 00:09:07.575 #undef SPDK_CONFIG_LTO 00:09:07.575 #define SPDK_CONFIG_MAX_LCORES 128 00:09:07.575 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:07.575 #define SPDK_CONFIG_NVME_CUSE 1 00:09:07.575 #undef SPDK_CONFIG_OCF 00:09:07.575 #define SPDK_CONFIG_OCF_PATH 00:09:07.575 #define SPDK_CONFIG_OPENSSL_PATH 00:09:07.575 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:07.575 #define SPDK_CONFIG_PGO_DIR 00:09:07.575 #undef SPDK_CONFIG_PGO_USE 00:09:07.575 #define SPDK_CONFIG_PREFIX /usr/local 00:09:07.575 #undef SPDK_CONFIG_RAID5F 00:09:07.575 #undef SPDK_CONFIG_RBD 00:09:07.575 #define SPDK_CONFIG_RDMA 1 00:09:07.575 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:07.575 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:07.575 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:07.575 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:07.575 #define SPDK_CONFIG_SHARED 1 00:09:07.575 #undef SPDK_CONFIG_SMA 00:09:07.575 #define SPDK_CONFIG_TESTS 1 00:09:07.575 #undef SPDK_CONFIG_TSAN 00:09:07.575 #define SPDK_CONFIG_UBLK 1 00:09:07.575 #define SPDK_CONFIG_UBSAN 1 00:09:07.575 #undef SPDK_CONFIG_UNIT_TESTS 00:09:07.575 #undef SPDK_CONFIG_URING 00:09:07.575 #define SPDK_CONFIG_URING_PATH 00:09:07.575 #undef SPDK_CONFIG_URING_ZNS 00:09:07.575 #undef SPDK_CONFIG_USDT 00:09:07.575 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:07.575 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:07.575 #define SPDK_CONFIG_VFIO_USER 1 00:09:07.575 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:07.575 #define SPDK_CONFIG_VHOST 1 00:09:07.575 #define SPDK_CONFIG_VIRTIO 1 00:09:07.575 #undef SPDK_CONFIG_VTUNE 00:09:07.575 #define SPDK_CONFIG_VTUNE_DIR 00:09:07.575 #define SPDK_CONFIG_WERROR 1 00:09:07.575 #define SPDK_CONFIG_WPDK_DIR 00:09:07.575 #undef SPDK_CONFIG_XNVME 00:09:07.575 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:07.575 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:07.576 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:07.576 
16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:07.576 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:07.576 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:07.836 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:07.837 
16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:07.837 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:07.837 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2714709 ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2714709 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.FJxMQr 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FJxMQr/tests/target /tmp/spdk.FJxMQr 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189102989312 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963949056 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6860959744 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970606080 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981972480 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:07.838 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981190144 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981976576 00:09:07.839 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=786432 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:07.839 * Looking for test storage... 
00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189102989312 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9075552256 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.839 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:07.839 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.839 --rc genhtml_branch_coverage=1 00:09:07.839 --rc genhtml_function_coverage=1 00:09:07.839 --rc genhtml_legend=1 00:09:07.839 --rc geninfo_all_blocks=1 00:09:07.839 --rc geninfo_unexecuted_blocks=1 00:09:07.839 00:09:07.839 ' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.839 --rc genhtml_branch_coverage=1 00:09:07.839 --rc genhtml_function_coverage=1 00:09:07.839 --rc genhtml_legend=1 00:09:07.839 --rc geninfo_all_blocks=1 00:09:07.839 --rc geninfo_unexecuted_blocks=1 00:09:07.839 00:09:07.839 ' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.839 --rc genhtml_branch_coverage=1 00:09:07.839 --rc genhtml_function_coverage=1 00:09:07.839 --rc genhtml_legend=1 00:09:07.839 --rc geninfo_all_blocks=1 00:09:07.839 --rc geninfo_unexecuted_blocks=1 00:09:07.839 00:09:07.839 ' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.839 --rc genhtml_branch_coverage=1 00:09:07.839 --rc genhtml_function_coverage=1 00:09:07.839 --rc genhtml_legend=1 00:09:07.839 --rc geninfo_all_blocks=1 00:09:07.839 --rc geninfo_unexecuted_blocks=1 00:09:07.839 00:09:07.839 ' 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.839 16:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.839 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.840 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.210 16:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.210 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:13.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:13.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.211 16:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:13.211 Found net devices under 0000:86:00.0: cvl_0_0 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:13.211 Found net devices under 0000:86:00.1: cvl_0_1 00:09:13.211 16:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.211 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.212 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.212 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:09:13.469 00:09:13.469 --- 10.0.0.2 ping statistics --- 00:09:13.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.469 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:13.469 00:09:13.469 --- 10.0.0.1 ping statistics --- 00:09:13.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.469 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:13.469 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:13.470 16:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.470 ************************************ 00:09:13.470 START TEST nvmf_filesystem_no_in_capsule 00:09:13.470 ************************************ 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2717966 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2717966 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2717966 ']' 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.470 16:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.470 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.470 [2024-11-04 16:20:40.197298] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:09:13.470 [2024-11-04 16:20:40.197336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.470 [2024-11-04 16:20:40.264301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.727 [2024-11-04 16:20:40.305536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.727 [2024-11-04 16:20:40.305569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:13.727 [2024-11-04 16:20:40.305576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.727 [2024-11-04 16:20:40.305582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.727 [2024-11-04 16:20:40.305587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.727 [2024-11-04 16:20:40.307007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.727 [2024-11-04 16:20:40.307025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.727 [2024-11-04 16:20:40.307117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.727 [2024-11-04 16:20:40.307118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.727 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.727 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:13.727 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.727 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.727 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.728 [2024-11-04 16:20:40.455407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.728 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 Malloc1 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.986 [2024-11-04 16:20:40.599186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:13.986 16:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:13.986 {
00:09:13.986 "name": "Malloc1",
00:09:13.986 "aliases": [
00:09:13.986 "43afe9a5-5778-4445-be11-a5dd26aa44a8"
00:09:13.986 ],
00:09:13.986 "product_name": "Malloc disk",
00:09:13.986 "block_size": 512,
00:09:13.986 "num_blocks": 1048576,
00:09:13.986 "uuid": "43afe9a5-5778-4445-be11-a5dd26aa44a8",
00:09:13.986 "assigned_rate_limits": {
00:09:13.986 "rw_ios_per_sec": 0,
00:09:13.986 "rw_mbytes_per_sec": 0,
00:09:13.986 "r_mbytes_per_sec": 0,
00:09:13.986 "w_mbytes_per_sec": 0
00:09:13.986 },
00:09:13.986 "claimed": true,
00:09:13.986 "claim_type": "exclusive_write",
00:09:13.986 "zoned": false,
00:09:13.986 "supported_io_types": {
00:09:13.986 "read": true,
00:09:13.986 "write": true,
00:09:13.986 "unmap": true,
00:09:13.986 "flush": true,
00:09:13.986 "reset": true,
00:09:13.986 "nvme_admin": false,
00:09:13.986 "nvme_io": false,
00:09:13.986 "nvme_io_md": false,
00:09:13.986 "write_zeroes": true,
00:09:13.986 "zcopy": true,
00:09:13.986 "get_zone_info": false,
00:09:13.986 "zone_management": false,
00:09:13.986 "zone_append": false,
00:09:13.986 "compare": false,
00:09:13.986 "compare_and_write": false,
00:09:13.986 "abort": true,
00:09:13.986 "seek_hole": false,
00:09:13.986 "seek_data": false,
00:09:13.986 "copy": true,
00:09:13.986 "nvme_iov_md": false
00:09:13.986 },
00:09:13.986 "memory_domains": [
00:09:13.986 {
00:09:13.986 "dma_device_id": "system",
00:09:13.986 "dma_device_type": 1
00:09:13.986 },
00:09:13.986 {
00:09:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:13.986 "dma_device_type": 2
00:09:13.986 }
00:09:13.986 ],
00:09:13.986 "driver_specific": {}
00:09:13.986 }
00:09:13.986 ]'
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:13.986 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:15.354 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 --
# waitforserial SPDKISFASTANDAWESOME 00:09:15.354 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:15.354 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.354 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:15.354 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:17.249 16:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:17.249 16:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:17.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:17.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:19.186 16:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.186 ************************************ 00:09:19.186 START TEST filesystem_ext4 00:09:19.186 ************************************ 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:19.186 16:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:19.186 mke2fs 1.47.0 (5-Feb-2023)
00:09:19.186 Discarding device blocks: 0/522240 done
00:09:19.186 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:19.186 Filesystem UUID: 45d8549a-f9bf-4c67-a013-6e02ece1fdb0
00:09:19.186 Superblock backups stored on blocks:
00:09:19.186 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:19.186
00:09:19.186 Allocating group tables: 0/64 done
00:09:19.186 Writing inode tables: 0/64 done
00:09:19.186 Creating journal (8192 blocks): done
00:09:19.186 Writing superblocks and filesystem accounting information: 0/64 done
00:09:19.186
00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:09:19.186 16:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:24.443 16:20:51
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2717966 00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.443 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.701 00:09:24.701 real 0m5.667s 00:09:24.701 user 0m0.040s 00:09:24.701 sys 0m0.057s 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:24.701 ************************************ 00:09:24.701 END TEST filesystem_ext4 00:09:24.701 ************************************ 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:24.701 
16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.701 ************************************ 00:09:24.701 START TEST filesystem_btrfs 00:09:24.701 ************************************ 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:24.701 16:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:09:24.701 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:24.958 btrfs-progs v6.8.1
00:09:24.958 See https://btrfs.readthedocs.io for more information.
00:09:24.958
00:09:24.958 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:24.958 NOTE: several default settings have changed in version 5.15, please make sure
00:09:24.958 this does not affect your deployments:
00:09:24.958 - DUP for metadata (-m dup)
00:09:24.958 - enabled no-holes (-O no-holes)
00:09:24.958 - enabled free-space-tree (-R free-space-tree)
00:09:24.958
00:09:24.958 Label: (null)
00:09:24.958 UUID: 79ebf195-db08-4804-a8c9-d60a78fe70c4
00:09:24.958 Node size: 16384
00:09:24.958 Sector size: 4096 (CPU page size: 4096)
00:09:24.958 Filesystem size: 510.00MiB
00:09:24.958 Block group profiles:
00:09:24.958 Data: single 8.00MiB
00:09:24.958 Metadata: DUP 32.00MiB
00:09:24.958 System: DUP 8.00MiB
00:09:24.958 SSD detected: yes
00:09:24.958 Zoned device: no
00:09:24.958 Features: extref, skinny-metadata, no-holes, free-space-tree
00:09:24.958 Checksum: crc32c
00:09:24.958 Number of devices: 1
00:09:24.958 Devices:
00:09:24.958 ID SIZE PATH
00:09:24.958 1 510.00MiB /dev/nvme0n1p1
00:09:24.958
00:09:24.958 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:09:24.958 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:25.521 16:20:52
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:25.521 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:25.521 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:25.521 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2717966 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:25.779 00:09:25.779 real 0m1.048s 00:09:25.779 user 0m0.026s 00:09:25.779 sys 0m0.118s 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.779 
16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:25.779 ************************************ 00:09:25.779 END TEST filesystem_btrfs 00:09:25.779 ************************************ 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.779 ************************************ 00:09:25.779 START TEST filesystem_xfs 00:09:25.779 ************************************ 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:09:25.779 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:25.779 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:25.779 = sectsz=512 attr=2, projid32bit=1
00:09:25.779 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:25.779 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:25.779 data = bsize=4096 blocks=130560, imaxpct=25
00:09:25.779 = sunit=0 swidth=0 blks
00:09:25.779 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:25.779 log =internal log bsize=4096 blocks=16384, version=2
00:09:25.779 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:25.779 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:27.149 Discarding blocks...Done.
00:09:27.149 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:27.149 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2717966 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:29.043 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:29.044 16:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:29.044 00:09:29.044 real 0m2.988s 00:09:29.044 user 0m0.024s 00:09:29.044 sys 0m0.075s 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:29.044 ************************************ 00:09:29.044 END TEST filesystem_xfs 00:09:29.044 ************************************ 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:29.044 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2717966 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2717966 ']' 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2717966 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2717966 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2717966' 00:09:29.301 killing process with pid 2717966 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2717966 00:09:29.301 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2717966 00:09:29.559 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:29.560 00:09:29.560 real 0m16.134s 00:09:29.560 user 1m3.554s 00:09:29.560 sys 0m1.334s 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.560 ************************************ 00:09:29.560 END TEST nvmf_filesystem_no_in_capsule 00:09:29.560 ************************************ 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.560 16:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.560 ************************************ 00:09:29.560 START TEST nvmf_filesystem_in_capsule 00:09:29.560 ************************************ 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2720733 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2720733 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2720733 ']' 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.560 16:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.560 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.817 [2024-11-04 16:20:56.409079] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:09:29.817 [2024-11-04 16:20:56.409125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.817 [2024-11-04 16:20:56.479446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.817 [2024-11-04 16:20:56.518258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.817 [2024-11-04 16:20:56.518297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.818 [2024-11-04 16:20:56.518305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.818 [2024-11-04 16:20:56.518311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.818 [2024-11-04 16:20:56.518315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:29.818 [2024-11-04 16:20:56.519717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.818 [2024-11-04 16:20:56.519818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.818 [2024-11-04 16:20:56.519902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.818 [2024-11-04 16:20:56.519903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.818 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.818 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:29.818 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.818 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.818 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 [2024-11-04 16:20:56.668598] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 16:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.075 [2024-11-04 16:20:56.814773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:30.075 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.076 16:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:09:30.076 {
00:09:30.076 "name": "Malloc1",
00:09:30.076 "aliases": [
00:09:30.076 "c3ab0051-2eb4-4ed8-a32e-0936ba9583a8"
00:09:30.076 ],
00:09:30.076 "product_name": "Malloc disk",
00:09:30.076 "block_size": 512,
00:09:30.076 "num_blocks": 1048576,
00:09:30.076 "uuid": "c3ab0051-2eb4-4ed8-a32e-0936ba9583a8",
00:09:30.076 "assigned_rate_limits": {
00:09:30.076 "rw_ios_per_sec": 0,
00:09:30.076 "rw_mbytes_per_sec": 0,
00:09:30.076 "r_mbytes_per_sec": 0,
00:09:30.076 "w_mbytes_per_sec": 0
00:09:30.076 },
00:09:30.076 "claimed": true,
00:09:30.076 "claim_type": "exclusive_write",
00:09:30.076 "zoned": false,
00:09:30.076 "supported_io_types": {
00:09:30.076 "read": true,
00:09:30.076 "write": true,
00:09:30.076 "unmap": true,
00:09:30.076 "flush": true,
00:09:30.076 "reset": true,
00:09:30.076 "nvme_admin": false,
00:09:30.076 "nvme_io": false,
00:09:30.076 "nvme_io_md": false,
00:09:30.076 "write_zeroes": true,
00:09:30.076 "zcopy": true,
00:09:30.076 "get_zone_info": false,
00:09:30.076 "zone_management": false,
00:09:30.076 "zone_append": false,
00:09:30.076 "compare": false,
00:09:30.076 "compare_and_write": false,
00:09:30.076 "abort": true,
00:09:30.076 "seek_hole": false,
00:09:30.076 "seek_data": false,
00:09:30.076 "copy": true,
00:09:30.076 "nvme_iov_md": false
00:09:30.076 },
00:09:30.076 "memory_domains": [
00:09:30.076 {
00:09:30.076 "dma_device_id": "system",
00:09:30.076 "dma_device_type": 1
00:09:30.076 },
00:09:30.076 {
00:09:30.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.076 "dma_device_type": 2
00:09:30.076 }
00:09:30.076 ],
00:09:30.076
"driver_specific": {} 00:09:30.076 } 00:09:30.076 ]' 00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:30.076 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:30.333 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:30.333 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:30.333 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:30.333 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:30.333 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.702 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.702 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:31.702 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.702 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:09:31.702 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.596 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.596 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.596 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.596 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:33.597 16:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:33.597 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:33.853 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:34.417 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.347 ************************************ 00:09:35.347 START TEST filesystem_in_capsule_ext4 00:09:35.347 ************************************ 00:09:35.347 16:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:35.347 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:35.604 mke2fs 1.47.0 (5-Feb-2023) 00:09:35.604 Discarding device blocks: 
0/522240 done
00:09:35.604 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:35.604 Filesystem UUID: 8a341af2-2729-455b-ada7-a1fff04170ba
00:09:35.604 Superblock backups stored on blocks:
00:09:35.604 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:35.604
00:09:35.604 Allocating group tables: 0/64 done
00:09:35.604 Writing inode tables: 0/64 done
00:09:36.168 Creating journal (8192 blocks): done
00:09:38.468 Writing superblocks and filesystem accounting information: 0/6428/64 done
00:09:38.468
00:09:38.468 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:09:38.468 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 2720733 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:43.727 00:09:43.727 real 0m8.367s 00:09:43.727 user 0m0.030s 00:09:43.727 sys 0m0.072s 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.727 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:43.727 ************************************ 00:09:43.727 END TEST filesystem_in_capsule_ext4 00:09:43.727 ************************************ 00:09:43.984 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:43.984 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.984 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.984 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:43.984 ************************************ 00:09:43.985 START 
TEST filesystem_in_capsule_btrfs 00:09:43.985 ************************************ 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:43.985 btrfs-progs v6.8.1
00:09:43.985 See https://btrfs.readthedocs.io for more information.
00:09:43.985
00:09:43.985 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:43.985 NOTE: several default settings have changed in version 5.15, please make sure
00:09:43.985 this does not affect your deployments:
00:09:43.985 - DUP for metadata (-m dup)
00:09:43.985 - enabled no-holes (-O no-holes)
00:09:43.985 - enabled free-space-tree (-R free-space-tree)
00:09:43.985
00:09:43.985 Label: (null)
00:09:43.985 UUID: fb8f2e09-0411-406b-8f3b-d9c4c6e6444c
00:09:43.985 Node size: 16384
00:09:43.985 Sector size: 4096 (CPU page size: 4096)
00:09:43.985 Filesystem size: 510.00MiB
00:09:43.985 Block group profiles:
00:09:43.985 Data: single 8.00MiB
00:09:43.985 Metadata: DUP 32.00MiB
00:09:43.985 System: DUP 8.00MiB
00:09:43.985 SSD detected: yes
00:09:43.985 Zoned device: no
00:09:43.985 Features: extref, skinny-metadata, no-holes, free-space-tree
00:09:43.985 Checksum: crc32c
00:09:43.985 Number of devices: 1
00:09:43.985 Devices:
00:09:43.985 ID SIZE PATH
00:09:43.985 1 510.00MiB /dev/nvme0n1p1
00:09:43.985
00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:09:43.985 16:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2720733 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:44.915 00:09:44.915 real 0m0.995s 00:09:44.915 user 0m0.032s 00:09:44.915 sys 0m0.110s 00:09:44.915 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:44.916 ************************************ 00:09:44.916 END TEST filesystem_in_capsule_btrfs 00:09:44.916 ************************************ 00:09:44.916 16:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.916 ************************************ 00:09:44.916 START TEST filesystem_in_capsule_xfs 00:09:44.916 ************************************ 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:44.916 
16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:44.916 16:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:45.172 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:45.172 = sectsz=512 attr=2, projid32bit=1 00:09:45.172 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:45.172 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:45.172 data = bsize=4096 blocks=130560, imaxpct=25 00:09:45.172 = sunit=0 swidth=0 blks 00:09:45.172 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:45.172 log =internal log bsize=4096 blocks=16384, version=2 00:09:45.172 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:45.172 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:46.102 Discarding blocks...Done. 
00:09:46.102 16:21:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:46.102 16:21:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:48.624 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:48.625 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:48.625 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:48.625 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:48.625 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:48.625 16:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2720733 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:48.625 00:09:48.625 real 0m3.360s 00:09:48.625 user 0m0.034s 00:09:48.625 sys 0m0.066s 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:48.625 ************************************ 00:09:48.625 END TEST filesystem_in_capsule_xfs 00:09:48.625 ************************************ 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:48.625 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.882 16:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2720733 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2720733 ']' 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2720733 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:48.882 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.882 16:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720733 00:09:49.139 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.139 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.139 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720733' 00:09:49.139 killing process with pid 2720733 00:09:49.139 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2720733 00:09:49.139 16:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2720733 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:49.397 00:09:49.397 real 0m19.694s 00:09:49.397 user 1m17.634s 00:09:49.397 sys 0m1.417s 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.397 ************************************ 00:09:49.397 END TEST nvmf_filesystem_in_capsule 00:09:49.397 ************************************ 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.397 rmmod nvme_tcp 00:09:49.397 rmmod nvme_fabrics 00:09:49.397 rmmod nvme_keyring 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.397 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.930 00:09:51.930 real 0m44.043s 00:09:51.930 user 2m23.050s 00:09:51.930 sys 0m7.095s 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 ************************************ 00:09:51.930 END TEST nvmf_filesystem 00:09:51.930 ************************************ 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 ************************************ 00:09:51.930 START TEST nvmf_target_discovery 00:09:51.930 ************************************ 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:51.930 * Looking for test storage... 
00:09:51.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:51.930 
16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.930 --rc genhtml_branch_coverage=1 00:09:51.930 --rc genhtml_function_coverage=1 00:09:51.930 --rc genhtml_legend=1 00:09:51.930 --rc geninfo_all_blocks=1 00:09:51.930 --rc geninfo_unexecuted_blocks=1 00:09:51.930 00:09:51.930 ' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.930 --rc genhtml_branch_coverage=1 00:09:51.930 --rc genhtml_function_coverage=1 00:09:51.930 --rc genhtml_legend=1 00:09:51.930 --rc geninfo_all_blocks=1 00:09:51.930 --rc geninfo_unexecuted_blocks=1 00:09:51.930 00:09:51.930 ' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.930 --rc genhtml_branch_coverage=1 00:09:51.930 --rc genhtml_function_coverage=1 00:09:51.930 --rc genhtml_legend=1 00:09:51.930 --rc geninfo_all_blocks=1 00:09:51.930 --rc geninfo_unexecuted_blocks=1 00:09:51.930 00:09:51.930 ' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.930 --rc genhtml_branch_coverage=1 00:09:51.930 --rc genhtml_function_coverage=1 00:09:51.930 --rc genhtml_legend=1 00:09:51.930 --rc geninfo_all_blocks=1 00:09:51.930 --rc geninfo_unexecuted_blocks=1 00:09:51.930 00:09:51.930 ' 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.930 16:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.930 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.931 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.194 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.194 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:57.194 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:57.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.194 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:57.194 Found net devices under 0000:86:00.0: cvl_0_0 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.194 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:57.194 Found net devices under 0000:86:00.1: cvl_0_1 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.194 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.195 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:57.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:57.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms
00:09:57.452
00:09:57.452 --- 10.0.0.2 ping statistics ---
00:09:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.452 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:57.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:57.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:09:57.452
00:09:57.452 --- 10.0.0.1 ping statistics ---
00:09:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.452 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2728206
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2728206
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2728206 ']'
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:57.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:57.452 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:57.452 [2024-11-04 16:21:24.183351] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
00:09:57.452 [2024-11-04 16:21:24.183399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:57.452 [2024-11-04 16:21:24.249494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:57.452 [2024-11-04 16:21:24.292898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:57.710 [2024-11-04 16:21:24.292934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:57.710 [2024-11-04 16:21:24.292941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:57.710 [2024-11-04 16:21:24.292948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:57.710 [2024-11-04 16:21:24.292953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:57.710 [2024-11-04 16:21:24.294494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:57.710 [2024-11-04 16:21:24.294518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:57.710 [2024-11-04 16:21:24.294583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.710 [2024-11-04 16:21:24.294581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery --
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 [2024-11-04 16:21:24.431162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 Null1 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 
16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 [2024-11-04 16:21:24.484591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 Null2 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 
16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.710 Null3 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:09:57.710 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 Null4 00:09:57.967 
16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.967 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:57.968 00:09:57.968 Discovery Log Number of Records 6, Generation counter 6 00:09:57.968 =====Discovery Log Entry 0====== 00:09:57.968 trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: current discovery subsystem 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4420 00:09:57.968 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: explicit discovery connections, duplicate discovery information 00:09:57.968 sectype: none 00:09:57.968 =====Discovery Log Entry 1====== 00:09:57.968 trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: nvme subsystem 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4420 00:09:57.968 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: none 00:09:57.968 sectype: none 00:09:57.968 =====Discovery Log Entry 2====== 00:09:57.968 
trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: nvme subsystem 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4420 00:09:57.968 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: none 00:09:57.968 sectype: none 00:09:57.968 =====Discovery Log Entry 3====== 00:09:57.968 trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: nvme subsystem 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4420 00:09:57.968 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: none 00:09:57.968 sectype: none 00:09:57.968 =====Discovery Log Entry 4====== 00:09:57.968 trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: nvme subsystem 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4420 00:09:57.968 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: none 00:09:57.968 sectype: none 00:09:57.968 =====Discovery Log Entry 5====== 00:09:57.968 trtype: tcp 00:09:57.968 adrfam: ipv4 00:09:57.968 subtype: discovery subsystem referral 00:09:57.968 treq: not required 00:09:57.968 portid: 0 00:09:57.968 trsvcid: 4430 00:09:57.968 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:57.968 traddr: 10.0.0.2 00:09:57.968 eflags: none 00:09:57.968 sectype: none 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:57.968 Perform nvmf subsystem discovery via RPC 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.968 [ 00:09:57.968 { 00:09:57.968 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:09:57.968 "subtype": "Discovery", 00:09:57.968 "listen_addresses": [ 00:09:57.968 { 00:09:57.968 "trtype": "TCP", 00:09:57.968 "adrfam": "IPv4", 00:09:57.968 "traddr": "10.0.0.2", 00:09:57.968 "trsvcid": "4420" 00:09:57.968 } 00:09:57.968 ], 00:09:57.968 "allow_any_host": true, 00:09:57.968 "hosts": [] 00:09:57.968 }, 00:09:57.968 { 00:09:57.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.968 "subtype": "NVMe", 00:09:57.968 "listen_addresses": [ 00:09:57.968 { 00:09:57.968 "trtype": "TCP", 00:09:57.968 "adrfam": "IPv4", 00:09:57.968 "traddr": "10.0.0.2", 00:09:57.968 "trsvcid": "4420" 00:09:57.968 } 00:09:57.968 ], 00:09:57.968 "allow_any_host": true, 00:09:57.968 "hosts": [], 00:09:57.968 "serial_number": "SPDK00000000000001", 00:09:57.968 "model_number": "SPDK bdev Controller", 00:09:57.968 "max_namespaces": 32, 00:09:57.968 "min_cntlid": 1, 00:09:57.968 "max_cntlid": 65519, 00:09:57.968 "namespaces": [ 00:09:57.968 { 00:09:57.968 "nsid": 1, 00:09:57.968 "bdev_name": "Null1", 00:09:57.968 "name": "Null1", 00:09:57.968 "nguid": "172F1734330E468699387D8415148769", 00:09:57.968 "uuid": "172f1734-330e-4686-9938-7d8415148769" 00:09:57.968 } 00:09:57.968 ] 00:09:57.968 }, 00:09:57.968 { 00:09:57.968 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:57.968 "subtype": "NVMe", 00:09:57.968 "listen_addresses": [ 00:09:57.968 { 00:09:57.968 "trtype": "TCP", 00:09:57.968 "adrfam": "IPv4", 00:09:57.968 "traddr": "10.0.0.2", 00:09:57.968 "trsvcid": "4420" 00:09:57.968 } 00:09:57.968 ], 00:09:57.968 "allow_any_host": true, 00:09:57.968 "hosts": [], 00:09:57.968 "serial_number": "SPDK00000000000002", 00:09:57.968 "model_number": "SPDK bdev Controller", 00:09:57.968 "max_namespaces": 32, 00:09:57.968 "min_cntlid": 1, 00:09:57.968 "max_cntlid": 65519, 00:09:57.968 "namespaces": [ 00:09:57.968 { 00:09:57.968 "nsid": 1, 00:09:57.968 "bdev_name": "Null2", 00:09:57.968 "name": "Null2", 00:09:57.968 "nguid": "B8DF182F7AC14AC68AF400B21B750356", 
00:09:57.968 "uuid": "b8df182f-7ac1-4ac6-8af4-00b21b750356" 00:09:57.968 } 00:09:57.968 ] 00:09:57.968 }, 00:09:57.968 { 00:09:57.968 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:57.968 "subtype": "NVMe", 00:09:57.968 "listen_addresses": [ 00:09:57.968 { 00:09:57.968 "trtype": "TCP", 00:09:57.968 "adrfam": "IPv4", 00:09:57.968 "traddr": "10.0.0.2", 00:09:57.968 "trsvcid": "4420" 00:09:57.968 } 00:09:57.968 ], 00:09:57.968 "allow_any_host": true, 00:09:57.968 "hosts": [], 00:09:57.968 "serial_number": "SPDK00000000000003", 00:09:57.968 "model_number": "SPDK bdev Controller", 00:09:57.968 "max_namespaces": 32, 00:09:57.968 "min_cntlid": 1, 00:09:57.968 "max_cntlid": 65519, 00:09:57.968 "namespaces": [ 00:09:57.968 { 00:09:57.968 "nsid": 1, 00:09:57.968 "bdev_name": "Null3", 00:09:57.968 "name": "Null3", 00:09:57.968 "nguid": "357067711279484784C9E9E0DC2D6C3E", 00:09:57.968 "uuid": "35706771-1279-4847-84c9-e9e0dc2d6c3e" 00:09:57.968 } 00:09:57.968 ] 00:09:57.968 }, 00:09:57.968 { 00:09:57.968 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:57.968 "subtype": "NVMe", 00:09:57.968 "listen_addresses": [ 00:09:57.968 { 00:09:57.968 "trtype": "TCP", 00:09:57.968 "adrfam": "IPv4", 00:09:57.968 "traddr": "10.0.0.2", 00:09:57.968 "trsvcid": "4420" 00:09:57.968 } 00:09:57.968 ], 00:09:57.968 "allow_any_host": true, 00:09:57.968 "hosts": [], 00:09:57.968 "serial_number": "SPDK00000000000004", 00:09:57.968 "model_number": "SPDK bdev Controller", 00:09:57.968 "max_namespaces": 32, 00:09:57.968 "min_cntlid": 1, 00:09:57.968 "max_cntlid": 65519, 00:09:57.968 "namespaces": [ 00:09:57.968 { 00:09:57.968 "nsid": 1, 00:09:57.968 "bdev_name": "Null4", 00:09:57.968 "name": "Null4", 00:09:57.968 "nguid": "8C11A7A0039C407B853B680128BAC662", 00:09:57.968 "uuid": "8c11a7a0-039c-407b-853b-680128bac662" 00:09:57.968 } 00:09:57.968 ] 00:09:57.968 } 00:09:57.968 ] 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.968 
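One detail worth noting in the `nvmf_get_subsystems` output above: each namespace reports both an `nguid` and a `uuid`, and SPDK derives the NGUID from the namespace UUID by stripping the dashes and upper-casing. A minimal sketch that checks this, using two nguid/uuid pairs copied verbatim from the log (plain Python, no SPDK required):

```python
import uuid

# nguid/uuid pairs copied from the nvmf_get_subsystems output above (Null1, Null2).
pairs = {
    "172F1734330E468699387D8415148769": "172f1734-330e-4686-9938-7d8415148769",
    "B8DF182F7AC14AC68AF400B21B750356": "b8df182f-7ac1-4ac6-8af4-00b21b750356",
}

for nguid, uid in pairs.items():
    # The reported NGUID is the UUID's 32 hex digits, dashes removed, upper-cased.
    assert nguid == uuid.UUID(uid).hex.upper()
print("nguid/uuid pairs consistent")
```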
16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.968 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:58.226 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.227 rmmod nvme_tcp 00:09:58.227 rmmod nvme_fabrics 00:09:58.227 rmmod nvme_keyring 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2728206 ']' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2728206 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2728206 ']' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2728206 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.227 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728206 00:09:58.227 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.227 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.227 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728206' 00:09:58.227 killing process with pid 2728206 00:09:58.227 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2728206 00:09:58.227 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2728206 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.486 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.016 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.016 00:10:01.016 real 0m8.979s 00:10:01.016 user 0m5.350s 00:10:01.016 sys 0m4.573s 00:10:01.016 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.016 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.016 ************************************ 00:10:01.016 END TEST nvmf_target_discovery 00:10:01.016 ************************************ 00:10:01.016 16:21:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:01.017 ************************************ 00:10:01.017 START TEST nvmf_referrals 00:10:01.017 ************************************ 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:01.017 * Looking for test storage... 
00:10:01.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:01.017 16:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.017 
--rc genhtml_branch_coverage=1 00:10:01.017 --rc genhtml_function_coverage=1 00:10:01.017 --rc genhtml_legend=1 00:10:01.017 --rc geninfo_all_blocks=1 00:10:01.017 --rc geninfo_unexecuted_blocks=1 00:10:01.017 00:10:01.017 ' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.017 --rc genhtml_branch_coverage=1 00:10:01.017 --rc genhtml_function_coverage=1 00:10:01.017 --rc genhtml_legend=1 00:10:01.017 --rc geninfo_all_blocks=1 00:10:01.017 --rc geninfo_unexecuted_blocks=1 00:10:01.017 00:10:01.017 ' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.017 --rc genhtml_branch_coverage=1 00:10:01.017 --rc genhtml_function_coverage=1 00:10:01.017 --rc genhtml_legend=1 00:10:01.017 --rc geninfo_all_blocks=1 00:10:01.017 --rc geninfo_unexecuted_blocks=1 00:10:01.017 00:10:01.017 ' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.017 --rc genhtml_branch_coverage=1 00:10:01.017 --rc genhtml_function_coverage=1 00:10:01.017 --rc genhtml_legend=1 00:10:01.017 --rc geninfo_all_blocks=1 00:10:01.017 --rc geninfo_unexecuted_blocks=1 00:10:01.017 00:10:01.017 ' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.017 
16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.017 16:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.017 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.018 16:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.018 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=()
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:06.293 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:06.293 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:06.293 Found net devices under 0000:86:00.0: cvl_0_0
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:06.293 Found net devices under 0000:86:00.1: cvl_0_1
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:06.293 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:06.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:06.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms
00:10:06.294
00:10:06.294 --- 10.0.0.2 ping statistics ---
00:10:06.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.294 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:06.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:06.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms
00:10:06.294
00:10:06.294 --- 10.0.0.1 ping statistics ---
00:10:06.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.294 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2731849
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2731849
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2731849 ']'
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:06.294 16:21:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.294 [2024-11-04 16:21:33.008831] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
00:10:06.294 [2024-11-04 16:21:33.008876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:06.294 [2024-11-04 16:21:33.079776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:06.552 [2024-11-04 16:21:33.125073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:06.552 [2024-11-04 16:21:33.125110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:06.552 [2024-11-04 16:21:33.125118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:06.552 [2024-11-04 16:21:33.125124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:06.552 [2024-11-04 16:21:33.125130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:06.552 [2024-11-04 16:21:33.126707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:06.552 [2024-11-04 16:21:33.126804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:06.552 [2024-11-04 16:21:33.126870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:06.552 [2024-11-04 16:21:33.126871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 [2024-11-04 16:21:33.270562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 [2024-11-04 16:21:33.283859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:06.552 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.809 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:07.066 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:07.327 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.327 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:07.666 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:07.961 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:08.235 16:21:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:08.492 rmmod nvme_tcp
00:10:08.492 rmmod nvme_fabrics
00:10:08.492 rmmod nvme_keyring
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2731849 ']'
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2731849
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2731849 ']'
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2731849
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731849
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731849'
00:10:08.492 killing process with pid 2731849
00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
common/autotest_common.sh@973 -- # kill 2731849 00:10:08.492 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2731849 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.751 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.281 00:10:11.281 real 0m10.207s 00:10:11.281 user 0m11.620s 00:10:11.281 sys 0m4.864s 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.281 
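The `killprocess 2731849` sequence above follows a common teardown pattern: probe liveness with `kill -0`, read the process name with `ps -o comm=` (the trace guards against killing `sudo` directly), then `kill` and `wait` to reap. A self-contained sketch of that flow, with a background `sleep` standing in for the nvmf target process (`reactor_0`):

```shell
#!/usr/bin/env bash
# Stand-in for the long-running nvmf target the trace is tearing down.
sleep 60 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then           # signal 0: liveness probe, sends nothing
    name=$(ps --no-headers -o comm= "$pid")   # same comm lookup as the trace
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; status reflects the signal
fi

kill -0 "$pid" 2>/dev/null && alive=yes || alive=no
echo "$alive"   # no
```

`wait` after `kill` is what keeps the follow-up `-- # wait 2731849` in the trace from racing the process's exit.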
************************************ 00:10:11.281 END TEST nvmf_referrals 00:10:11.281 ************************************ 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.281 ************************************ 00:10:11.281 START TEST nvmf_connect_disconnect 00:10:11.281 ************************************ 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:11.281 * Looking for test storage... 
00:10:11.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.281 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
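The `lt 1.15 2` walk traced above (scripts/common.sh `cmp_versions`) splits both versions on `.-:` into arrays and compares them component by component, padding the shorter one with zeros. A compressed sketch of that logic — this is an illustrative reimplementation, not the script's exact code, and like the original's `decimal` check it assumes numeric components:

```shell
#!/usr/bin/env bash
# Returns 0 (true) when $1 < $2 under component-wise numeric comparison,
# mirroring cmp_versions' "<" branch. Components must be integers.
version_lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing component -> 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo lt || echo ge   # lt -- matches the lcov check above
version_lt 2.1  2 && echo lt || echo ge   # ge
```

This is why `lt 1.15 2` succeeds in the trace even though a plain string comparison of "1.15" and "2" would also happen to agree here: 1 < 2 decides it at the first component.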
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.282 --rc genhtml_branch_coverage=1 00:10:11.282 --rc genhtml_function_coverage=1 00:10:11.282 --rc genhtml_legend=1 00:10:11.282 --rc geninfo_all_blocks=1 00:10:11.282 --rc geninfo_unexecuted_blocks=1 00:10:11.282 00:10:11.282 ' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.282 --rc genhtml_branch_coverage=1 00:10:11.282 --rc genhtml_function_coverage=1 00:10:11.282 --rc genhtml_legend=1 00:10:11.282 --rc geninfo_all_blocks=1 00:10:11.282 --rc geninfo_unexecuted_blocks=1 00:10:11.282 00:10:11.282 ' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.282 --rc genhtml_branch_coverage=1 00:10:11.282 --rc genhtml_function_coverage=1 00:10:11.282 --rc genhtml_legend=1 00:10:11.282 --rc geninfo_all_blocks=1 00:10:11.282 --rc geninfo_unexecuted_blocks=1 00:10:11.282 00:10:11.282 ' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.282 --rc genhtml_branch_coverage=1 00:10:11.282 --rc genhtml_function_coverage=1 00:10:11.282 --rc genhtml_legend=1 00:10:11.282 --rc geninfo_all_blocks=1 00:10:11.282 --rc geninfo_unexecuted_blocks=1 00:10:11.282 00:10:11.282 ' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
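The `common.sh: line 33: [: : integer expression expected` message in the trace comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` needs an integer on both sides, and an unset/empty variable hands it an empty string. The script survives because the failed test just takes the false branch. A sketch of the failure and the usual default-expansion guard (the variable name here is illustrative, not the one `common.sh` uses):

```shell
#!/usr/bin/env bash
MAYBE_FLAG=""   # stands in for an unset feature flag

# Reproduces the trace's failure mode: test(1) errors (status 2) on '' -eq 1,
# which behaves like "false" for the && / || chain.
[ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null && branch=huge || branch=no-huge
echo "$branch"   # no-huge

# Guard: ${var:-0} substitutes 0 for empty/unset, so -eq always sees an
# integer and no error is printed.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    guarded=huge
else
    guarded=no-huge
fi
echo "$guarded"   # no-huge
```

`[[ ... -eq ... ]]` would also avoid the message, since `[[` does arithmetic evaluation where an empty operand counts as 0.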
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.282 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.283 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.544 16:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.544 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.545 16:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:16.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:16.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.545 16:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:16.545 Found net devices under 0000:86:00.0: cvl_0_0 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.545 16:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:16.545 Found net devices under 0000:86:00.1: cvl_0_1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
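The "Found net devices under 0000:86:00.x: cvl_0_y" lines above come from globbing each PCI address's `net/` directory under sysfs and stripping the path with `##*/`. A self-contained sketch of that walk (nvmf/common.sh@410-429), using a throwaway directory in place of `/sys/bus/pci/devices` so it runs anywhere:

```shell
#!/usr/bin/env bash
# Mock sysfs tree: two E810 ports, one netdev each (names match the trace).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # one glob hit per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basename only, as in common.sh@427
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

echo "${net_devs[*]}"   # cvl_0_0 cvl_0_1
rm -rf "$sysfs"
```

The trace's `(( 1 == 0 ))` checks are the empty-glob guard: with `nullglob` unset, a PCI function with no netdev would leave the unexpanded pattern in the array instead of a device name.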
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.545 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.545 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.545 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:10:16.546 00:10:16.546 --- 10.0.0.2 ping statistics --- 00:10:16.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.546 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
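The nvmf_tcp_init bring-up traced above (flush addresses, create the namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping) can be sketched as a standalone script. Interface names, the namespace name, and addresses are taken from this log; the `run`/`DRY_RUN` wrapper is an illustration (not part of nvmf/common.sh) so the plan can be printed without root. Set DRY_RUN=0 to execute for real, which requires root and the actual NICs.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from nvmf/common.sh, as traced above.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}        # NIC handed to the target namespace
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}  # NIC left in the default namespace
NS=${NS:-cvl_0_0_ns_spdk}
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

# Illustrative wrapper: by default print each command; DRY_RUN=0 executes it.
PLAN=""
run() { PLAN+="$*"$'\n'; if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"              # target NIC moves into the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag lets teardown strip the rule later.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
run ping -c 1 "$TARGET_IP"                            # default ns -> namespace
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"     # namespace -> default ns
```

The two ping checks mirror the log: both directions must answer before the test proceeds.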
00:10:16.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:16.546 00:10:16.546 --- 10.0.0.1 ping statistics --- 00:10:16.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.546 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2735841 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2735841 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2735841 ']' 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.546 [2024-11-04 16:21:43.159164] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:10:16.546 [2024-11-04 16:21:43.159209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.546 [2024-11-04 16:21:43.226398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.546 [2024-11-04 16:21:43.269526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
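The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is a poll loop: the target is launched in the background and the harness retries until the RPC socket exists or the process dies. The retry loop below is an illustration of that shape, not SPDK's exact waitforlisten; the `rpc_sock_ready` probe is a hypothetical helper introduced here so the loop is self-contained.

```shell
# Illustrative sketch of the waitforlisten poll traced above; the probe
# function is an assumption, factored out so it can be swapped in tests.
rpc_sock_ready() { [ -S "$1" ]; }   # default probe: the UNIX socket exists

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        rpc_sock_ready "$rpc_addr" && return 0   # socket answers; done
        sleep 0.5
    done
    return 1                                     # retries exhausted
}
```

In the log the equivalent check guards every subsequent rpc_cmd call: nothing is sent until the socket is accepting connections.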
00:10:16.546 [2024-11-04 16:21:43.269561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.546 [2024-11-04 16:21:43.269568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.546 [2024-11-04 16:21:43.269574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.546 [2024-11-04 16:21:43.269579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.546 [2024-11-04 16:21:43.270980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.546 [2024-11-04 16:21:43.271084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.546 [2024-11-04 16:21:43.271172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.546 [2024-11-04 16:21:43.271173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.546 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.804 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.804 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:16.804 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.804 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.804 [2024-11-04 16:21:43.408116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.804 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.804 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 [2024-11-04 16:21:43.470007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:16.805 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:20.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:33.199 16:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.199 rmmod nvme_tcp 00:10:33.199 rmmod nvme_fabrics 00:10:33.199 rmmod nvme_keyring 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2735841 ']' 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2735841 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2735841 ']' 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2735841 00:10:33.199 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2735841 
00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2735841' 00:10:33.200 killing process with pid 2735841 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2735841 00:10:33.200 16:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2735841 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.459 16:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.459 16:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.993 00:10:35.993 real 0m24.593s 00:10:35.993 user 1m8.383s 00:10:35.993 sys 0m5.367s 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:35.993 ************************************ 00:10:35.993 END TEST nvmf_connect_disconnect 00:10:35.993 ************************************ 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.993 ************************************ 00:10:35.993 START TEST nvmf_multitarget 00:10:35.993 ************************************ 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:35.993 * Looking for test storage... 
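The subsystem exercised by the connect_disconnect test above was assembled with five RPCs (transport, malloc bdev, subsystem, namespace, listener), mirrored here from the rpc_cmd trace. The rpc.py path and a running nvmf_tgt are assumptions; the `run`/`DRY_RUN` wrapper is illustrative so the calls print by default instead of requiring a live target. Option meanings are not expanded here; the flags are copied verbatim from the trace.

```shell
# The five RPC calls from target/connect_disconnect.sh, as traced above.
RPC=${RPC:-scripts/rpc.py}             # assumed path to SPDK's rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
PLAN=""
run() { PLAN+="$*"$'\n'; if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
run "$RPC" bdev_malloc_create 64 512                       # 64 MiB, 512 B blocks -> Malloc0
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0            # expose the bdev as a namespace
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, the test loops `nvme connect` / `nvme disconnect` against the NQN, which is what produces the five "disconnected 1 controller(s)" lines in the log.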
00:10:35.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:35.993 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.993 --rc genhtml_branch_coverage=1 00:10:35.993 --rc genhtml_function_coverage=1 00:10:35.993 --rc genhtml_legend=1 00:10:35.993 --rc geninfo_all_blocks=1 00:10:35.993 --rc geninfo_unexecuted_blocks=1 00:10:35.993 00:10:35.993 ' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.993 --rc genhtml_branch_coverage=1 00:10:35.993 --rc genhtml_function_coverage=1 00:10:35.993 --rc genhtml_legend=1 00:10:35.993 --rc geninfo_all_blocks=1 00:10:35.993 --rc geninfo_unexecuted_blocks=1 00:10:35.993 00:10:35.993 ' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.993 --rc genhtml_branch_coverage=1 00:10:35.993 --rc genhtml_function_coverage=1 00:10:35.993 --rc genhtml_legend=1 00:10:35.993 --rc geninfo_all_blocks=1 00:10:35.993 --rc geninfo_unexecuted_blocks=1 00:10:35.993 00:10:35.993 ' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.993 --rc genhtml_branch_coverage=1 00:10:35.993 --rc genhtml_function_coverage=1 00:10:35.993 --rc genhtml_legend=1 00:10:35.993 --rc geninfo_all_blocks=1 00:10:35.993 --rc geninfo_unexecuted_blocks=1 00:10:35.993 00:10:35.993 ' 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.993 16:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.993 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
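The `lt 1.15 2` / cmp_versions trace earlier in this test's setup (scripts/common.sh) splits each version string on `.`, `-`, and `:` and compares field by field numerically, padding the shorter version with zeros. A simplified pure-shell sketch of that logic, assuming numeric fields (the real helper also regex-checks each field via its `decimal` function):

```shell
# Simplified sketch of cmp_versions from scripts/common.sh, as traced above.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    local op=$2
    IFS='.-:' read -ra ver2 <<< "$3"
    local v n1 n2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        n1=${ver1[v]:-0}; n2=${ver2[v]:-0}   # missing fields compare as 0
        if ((n1 > n2)); then [[ $op == *\>* ]]; return; fi
        if ((n1 < n2)); then [[ $op == *\<* ]]; return; fi
    done
    [[ $op == *=* ]]                          # all fields equal
}
```

This is why `lt 1.15 2` succeeds in the trace (1 < 2 on the first field) even though a plain string comparison of "1.15" and "2" would also happen to agree; field-wise comparison is what makes "1.15" sort after "1.9".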
00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.994 16:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.994 16:22:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:41.260 16:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.260 16:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:41.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:41.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.260 16:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.260 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:41.261 Found net devices under 0000:86:00.0: cvl_0_0 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.261 
16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:41.261 Found net devices under 0000:86:00.1: cvl_0_1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.261 16:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:10:41.261 00:10:41.261 --- 10.0.0.2 ping statistics --- 00:10:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.261 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:41.261 00:10:41.261 --- 10.0.0.1 ping statistics --- 00:10:41.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.261 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2742019 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2742019 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2742019 ']' 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.261 [2024-11-04 16:22:07.819332] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:10:41.261 [2024-11-04 16:22:07.819377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.261 [2024-11-04 16:22:07.884185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.261 [2024-11-04 16:22:07.926696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.261 [2024-11-04 16:22:07.926732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:41.261 [2024-11-04 16:22:07.926745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.261 [2024-11-04 16:22:07.926751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.261 [2024-11-04 16:22:07.926756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.261 [2024-11-04 16:22:07.928215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.261 [2024-11-04 16:22:07.928313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.261 [2024-11-04 16:22:07.928402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.261 [2024-11-04 16:22:07.928403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:41.261 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:41.261 16:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:41.519 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:41.519 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:41.519 "nvmf_tgt_1" 00:10:41.520 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:41.777 "nvmf_tgt_2" 00:10:41.777 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:41.777 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:41.777 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:41.777 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:41.777 true 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:42.035 true 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.035 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.035 rmmod nvme_tcp 00:10:42.035 rmmod nvme_fabrics 00:10:42.035 rmmod nvme_keyring 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2742019 ']' 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2742019 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2742019 ']' 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2742019 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742019 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742019' 00:10:42.294 killing process with pid 2742019 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2742019 00:10:42.294 16:22:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2742019 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.294 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.826 00:10:44.826 real 0m8.885s 00:10:44.826 user 0m6.840s 00:10:44.826 sys 0m4.396s 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:44.826 ************************************ 00:10:44.826 END TEST nvmf_multitarget 00:10:44.826 ************************************ 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.826 ************************************ 00:10:44.826 START TEST nvmf_rpc 00:10:44.826 ************************************ 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:44.826 * Looking for test storage... 
00:10:44.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.826 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.826 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.827 --rc genhtml_branch_coverage=1 00:10:44.827 --rc genhtml_function_coverage=1 00:10:44.827 --rc genhtml_legend=1 00:10:44.827 --rc geninfo_all_blocks=1 00:10:44.827 --rc geninfo_unexecuted_blocks=1 
00:10:44.827 00:10:44.827 ' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.827 --rc genhtml_branch_coverage=1 00:10:44.827 --rc genhtml_function_coverage=1 00:10:44.827 --rc genhtml_legend=1 00:10:44.827 --rc geninfo_all_blocks=1 00:10:44.827 --rc geninfo_unexecuted_blocks=1 00:10:44.827 00:10:44.827 ' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.827 --rc genhtml_branch_coverage=1 00:10:44.827 --rc genhtml_function_coverage=1 00:10:44.827 --rc genhtml_legend=1 00:10:44.827 --rc geninfo_all_blocks=1 00:10:44.827 --rc geninfo_unexecuted_blocks=1 00:10:44.827 00:10:44.827 ' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.827 --rc genhtml_branch_coverage=1 00:10:44.827 --rc genhtml_function_coverage=1 00:10:44.827 --rc genhtml_legend=1 00:10:44.827 --rc geninfo_all_blocks=1 00:10:44.827 --rc geninfo_unexecuted_blocks=1 00:10:44.827 00:10:44.827 ' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.827 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.827 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.827 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.094 
16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:10:50.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.094 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.094 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.094 16:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.094 
16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.094 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.095 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:10:50.095 00:10:50.095 --- 10.0.0.2 ping statistics --- 00:10:50.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.095 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:10:50.353 00:10:50.353 --- 10.0.0.1 ping statistics --- 00:10:50.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.353 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2745801 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2745801 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2745801 ']' 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.353 16:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.353 [2024-11-04 16:22:17.020452] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:10:50.353 [2024-11-04 16:22:17.020504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.353 [2024-11-04 16:22:17.086844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.353 [2024-11-04 16:22:17.130361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.353 [2024-11-04 16:22:17.130399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.353 [2024-11-04 16:22:17.130406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.353 [2024-11-04 16:22:17.130412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.353 [2024-11-04 16:22:17.130417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.353 [2024-11-04 16:22:17.131957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.353 [2024-11-04 16:22:17.132052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.353 [2024-11-04 16:22:17.132142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.353 [2024-11-04 16:22:17.132143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.611 16:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:50.611 "tick_rate": 2100000000, 00:10:50.611 "poll_groups": [ 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_000", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_001", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_002", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_003", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [] 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 }' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:50.611 16:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.611 [2024-11-04 16:22:17.381159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:50.611 "tick_rate": 2100000000, 00:10:50.611 "poll_groups": [ 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_000", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [ 00:10:50.611 { 00:10:50.611 "trtype": "TCP" 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_001", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 
"completed_nvme_io": 0, 00:10:50.611 "transports": [ 00:10:50.611 { 00:10:50.611 "trtype": "TCP" 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_002", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [ 00:10:50.611 { 00:10:50.611 "trtype": "TCP" 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 }, 00:10:50.611 { 00:10:50.611 "name": "nvmf_tgt_poll_group_003", 00:10:50.611 "admin_qpairs": 0, 00:10:50.611 "io_qpairs": 0, 00:10:50.611 "current_admin_qpairs": 0, 00:10:50.611 "current_io_qpairs": 0, 00:10:50.611 "pending_bdev_io": 0, 00:10:50.611 "completed_nvme_io": 0, 00:10:50.611 "transports": [ 00:10:50.611 { 00:10:50.611 "trtype": "TCP" 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 } 00:10:50.611 ] 00:10:50.611 }' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:50.611 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:50.869 
16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 Malloc1 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:50.869 16:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 [2024-11-04 16:22:17.560768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:50.869 [2024-11-04 16:22:17.589263] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:10:50.869 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:50.869 could not add new controller: failed to write to nvme-fabrics device 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.869 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.870 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:50.870 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.870 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.870 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.870 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.242 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.242 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.242 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.242 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:52.242 16:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:54.140 16:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:54.140 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.140 [2024-11-04 16:22:20.952796] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:10:54.398 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:54.398 could not add new controller: failed to write to nvme-fabrics device 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:54.398 
16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.398 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.771 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.771 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.771 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.771 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:55.771 16:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:57.668 16:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.668 [2024-11-04 16:22:24.319852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.668 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.669 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.669 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.669 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.669 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.041 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.041 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.041 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.041 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:59.041 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.940 
16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.940 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.941 [2024-11-04 16:22:27.619002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.941 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.313 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.313 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.313 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.313 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.313 16:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 16:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 [2024-11-04 16:22:30.967636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.214 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.587 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.587 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.587 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.587 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.587 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.484 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.745 [2024-11-04 16:22:34.319299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.745 16:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.762 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.762 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.762 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:08.762 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:08.762 16:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 [2024-11-04 16:22:37.698705] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.302 16:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.241 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.241 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.241 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.241 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:12.241 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:14.141 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.399 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.399 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.399 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.399 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 [2024-11-04 16:22:41.010693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.399 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 [2024-11-04 16:22:41.058714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 
16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:11:14.400 [2024-11-04 16:22:41.106839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 [2024-11-04 16:22:41.154996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.400 [2024-11-04 16:22:41.203164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.400 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.401 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.401 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.401 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.401 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.658 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:14.658 "tick_rate": 2100000000, 00:11:14.658 "poll_groups": [ 00:11:14.658 { 00:11:14.658 "name": "nvmf_tgt_poll_group_000", 00:11:14.658 "admin_qpairs": 2, 00:11:14.658 "io_qpairs": 168, 00:11:14.658 "current_admin_qpairs": 0, 00:11:14.658 "current_io_qpairs": 0, 00:11:14.658 "pending_bdev_io": 0, 00:11:14.658 "completed_nvme_io": 219, 00:11:14.658 "transports": [ 00:11:14.658 { 00:11:14.658 "trtype": "TCP" 00:11:14.658 } 00:11:14.658 ] 00:11:14.658 }, 00:11:14.658 { 00:11:14.658 "name": "nvmf_tgt_poll_group_001", 00:11:14.658 "admin_qpairs": 2, 00:11:14.658 "io_qpairs": 168, 00:11:14.658 "current_admin_qpairs": 0, 00:11:14.658 "current_io_qpairs": 0, 00:11:14.658 "pending_bdev_io": 0, 00:11:14.658 "completed_nvme_io": 268, 00:11:14.658 "transports": [ 00:11:14.658 { 00:11:14.658 "trtype": "TCP" 00:11:14.658 } 00:11:14.659 ] 00:11:14.659 }, 00:11:14.659 { 00:11:14.659 "name": "nvmf_tgt_poll_group_002", 00:11:14.659 "admin_qpairs": 1, 00:11:14.659 "io_qpairs": 168, 00:11:14.659 "current_admin_qpairs": 0, 00:11:14.659 "current_io_qpairs": 0, 00:11:14.659 "pending_bdev_io": 0, 
00:11:14.659 "completed_nvme_io": 318, 00:11:14.659 "transports": [ 00:11:14.659 { 00:11:14.659 "trtype": "TCP" 00:11:14.659 } 00:11:14.659 ] 00:11:14.659 }, 00:11:14.659 { 00:11:14.659 "name": "nvmf_tgt_poll_group_003", 00:11:14.659 "admin_qpairs": 2, 00:11:14.659 "io_qpairs": 168, 00:11:14.659 "current_admin_qpairs": 0, 00:11:14.659 "current_io_qpairs": 0, 00:11:14.659 "pending_bdev_io": 0, 00:11:14.659 "completed_nvme_io": 217, 00:11:14.659 "transports": [ 00:11:14.659 { 00:11:14.659 "trtype": "TCP" 00:11:14.659 } 00:11:14.659 ] 00:11:14.659 } 00:11:14.659 ] 00:11:14.659 }' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.659 rmmod nvme_tcp 00:11:14.659 rmmod nvme_fabrics 00:11:14.659 rmmod nvme_keyring 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2745801 ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2745801 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2745801 ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2745801 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2745801 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2745801' 00:11:14.659 killing process with pid 2745801 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2745801 00:11:14.659 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2745801 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.917 16:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.449 00:11:17.449 real 0m32.497s 00:11:17.449 user 1m39.275s 00:11:17.449 sys 0m6.205s 00:11:17.449 16:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.449 ************************************ 00:11:17.449 END TEST nvmf_rpc 00:11:17.449 ************************************ 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.449 ************************************ 00:11:17.449 START TEST nvmf_invalid 00:11:17.449 ************************************ 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:17.449 * Looking for test storage... 
00:11:17.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.449 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.450 --rc genhtml_branch_coverage=1 00:11:17.450 --rc 
genhtml_function_coverage=1 00:11:17.450 --rc genhtml_legend=1 00:11:17.450 --rc geninfo_all_blocks=1 00:11:17.450 --rc geninfo_unexecuted_blocks=1 00:11:17.450 00:11:17.450 ' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.450 --rc genhtml_branch_coverage=1 00:11:17.450 --rc genhtml_function_coverage=1 00:11:17.450 --rc genhtml_legend=1 00:11:17.450 --rc geninfo_all_blocks=1 00:11:17.450 --rc geninfo_unexecuted_blocks=1 00:11:17.450 00:11:17.450 ' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.450 --rc genhtml_branch_coverage=1 00:11:17.450 --rc genhtml_function_coverage=1 00:11:17.450 --rc genhtml_legend=1 00:11:17.450 --rc geninfo_all_blocks=1 00:11:17.450 --rc geninfo_unexecuted_blocks=1 00:11:17.450 00:11:17.450 ' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.450 --rc genhtml_branch_coverage=1 00:11:17.450 --rc genhtml_function_coverage=1 00:11:17.450 --rc genhtml_legend=1 00:11:17.450 --rc geninfo_all_blocks=1 00:11:17.450 --rc geninfo_unexecuted_blocks=1 00:11:17.450 00:11:17.450 ' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.450 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.450 16:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.450 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.451 16:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.451 16:22:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.721 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.721 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.722 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:22.722 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:22.722 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:22.722 Found net devices under 0000:86:00.0: cvl_0_0 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:22.722 Found net devices under 0000:86:00.1: cvl_0_1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.722 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.722 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.983 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:11:22.983 00:11:22.983 --- 10.0.0.2 ping statistics --- 00:11:22.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.983 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:22.983 00:11:22.983 --- 10.0.0.1 ping statistics --- 00:11:22.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.983 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.983 16:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2753412 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2753412 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2753412 ']' 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.983 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.983 [2024-11-04 16:22:49.716726] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:11:22.983 [2024-11-04 16:22:49.716772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.983 [2024-11-04 16:22:49.784524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.241 [2024-11-04 16:22:49.828088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.241 [2024-11-04 16:22:49.828124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.241 [2024-11-04 16:22:49.828131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.241 [2024-11-04 16:22:49.828137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.241 [2024-11-04 16:22:49.828142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.241 [2024-11-04 16:22:49.829530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.241 [2024-11-04 16:22:49.829656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.241 [2024-11-04 16:22:49.829682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.241 [2024-11-04 16:22:49.829683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:23.241 16:22:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3194 00:11:23.499 [2024-11-04 16:22:50.142979] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:23.499 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:23.499 { 00:11:23.499 "nqn": "nqn.2016-06.io.spdk:cnode3194", 00:11:23.499 "tgt_name": "foobar", 00:11:23.499 "method": "nvmf_create_subsystem", 00:11:23.499 "req_id": 1 00:11:23.499 } 00:11:23.499 Got JSON-RPC error 
response 00:11:23.499 response: 00:11:23.499 { 00:11:23.499 "code": -32603, 00:11:23.499 "message": "Unable to find target foobar" 00:11:23.499 }' 00:11:23.499 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:23.499 { 00:11:23.499 "nqn": "nqn.2016-06.io.spdk:cnode3194", 00:11:23.499 "tgt_name": "foobar", 00:11:23.499 "method": "nvmf_create_subsystem", 00:11:23.499 "req_id": 1 00:11:23.499 } 00:11:23.499 Got JSON-RPC error response 00:11:23.499 response: 00:11:23.499 { 00:11:23.499 "code": -32603, 00:11:23.499 "message": "Unable to find target foobar" 00:11:23.499 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:23.499 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:23.499 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13638 00:11:23.757 [2024-11-04 16:22:50.355732] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13638: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:23.757 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:23.757 { 00:11:23.757 "nqn": "nqn.2016-06.io.spdk:cnode13638", 00:11:23.757 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:23.757 "method": "nvmf_create_subsystem", 00:11:23.757 "req_id": 1 00:11:23.757 } 00:11:23.757 Got JSON-RPC error response 00:11:23.757 response: 00:11:23.757 { 00:11:23.757 "code": -32602, 00:11:23.757 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:23.757 }' 00:11:23.757 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:23.757 { 00:11:23.757 "nqn": "nqn.2016-06.io.spdk:cnode13638", 00:11:23.757 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:23.757 "method": "nvmf_create_subsystem", 00:11:23.757 
"req_id": 1 00:11:23.757 } 00:11:23.757 Got JSON-RPC error response 00:11:23.757 response: 00:11:23.757 { 00:11:23.757 "code": -32602, 00:11:23.757 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:23.757 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:23.757 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:23.757 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24767 00:11:23.757 [2024-11-04 16:22:50.560399] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24767: invalid model number 'SPDK_Controller' 00:11:24.015 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:24.015 { 00:11:24.015 "nqn": "nqn.2016-06.io.spdk:cnode24767", 00:11:24.015 "model_number": "SPDK_Controller\u001f", 00:11:24.015 "method": "nvmf_create_subsystem", 00:11:24.015 "req_id": 1 00:11:24.015 } 00:11:24.015 Got JSON-RPC error response 00:11:24.015 response: 00:11:24.015 { 00:11:24.015 "code": -32602, 00:11:24.015 "message": "Invalid MN SPDK_Controller\u001f" 00:11:24.015 }' 00:11:24.015 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:24.015 { 00:11:24.015 "nqn": "nqn.2016-06.io.spdk:cnode24767", 00:11:24.015 "model_number": "SPDK_Controller\u001f", 00:11:24.015 "method": "nvmf_create_subsystem", 00:11:24.015 "req_id": 1 00:11:24.015 } 00:11:24.015 Got JSON-RPC error response 00:11:24.015 response: 00:11:24.015 { 00:11:24.015 "code": -32602, 00:11:24.015 "message": "Invalid MN SPDK_Controller\u001f" 00:11:24.015 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:24.015 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:24.015 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.016 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'I>AA-HJ+fP+=tvehtQ5@' 00:11:24.016 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I>AA-HJ+fP+=tvehtQ5@' nqn.2016-06.io.spdk:cnode15677 00:11:24.275 [2024-11-04 16:22:50.909594] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15677: invalid serial number 'I>AA-HJ+fP+=tvehtQ5@' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:24.275 { 00:11:24.275 "nqn": "nqn.2016-06.io.spdk:cnode15677", 00:11:24.275 "serial_number": "I>AA-HJ+fP\u007f+=tvehtQ5@", 00:11:24.275 "method": "nvmf_create_subsystem", 00:11:24.275 "req_id": 1 00:11:24.275 } 00:11:24.275 Got JSON-RPC error response 00:11:24.275 response: 00:11:24.275 { 00:11:24.275 "code": -32602, 00:11:24.275 "message": "Invalid SN I>AA-HJ+fP\u007f+=tvehtQ5@" 00:11:24.275 }' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:24.275 { 00:11:24.275 "nqn": "nqn.2016-06.io.spdk:cnode15677", 00:11:24.275 "serial_number": "I>AA-HJ+fP\u007f+=tvehtQ5@", 00:11:24.275 "method": "nvmf_create_subsystem", 00:11:24.275 "req_id": 1 00:11:24.275 } 00:11:24.275 Got JSON-RPC error response 00:11:24.275 response: 00:11:24.275 { 00:11:24.275 "code": -32602, 00:11:24.275 "message": "Invalid SN I>AA-HJ+fP\u007f+=tvehtQ5@" 00:11:24.275 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:24.275 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:24.275 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:24.275 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:24.275 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:24.276 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:24.276 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:24.534 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:24.534 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.534 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:24.535 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:11:24.535 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~[- /dev/null' 00:11:26.859 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.762 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.762 00:11:28.762 real 0m11.704s 00:11:28.762 user 0m18.511s 00:11:28.763 sys 0m5.198s 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:28.763 ************************************ 00:11:28.763 END TEST nvmf_invalid 00:11:28.763 ************************************ 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra -- 
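The loop traced above (target/invalid.sh lines 24-25) builds a deliberately invalid NQN string one character at a time: print a code point as hex with `printf %x`, turn it into a character with `echo -e`, and append. A minimal standalone sketch, with the code-point list transcribed from the trace above (the `codes` variable name is illustrative, not from invalid.sh):

```shell
# Rebuild the string the traced loop produced: code point -> hex -> character.
codes=(91 126 67 67 55 127 99 90 58 115 82 97 61 121 80 64 113 105 79 59)
string=''
for code in "${codes[@]}"; do
  hex=$(printf '%x' "$code")      # e.g. 91 -> 5b
  string+=$(echo -e "\\x$hex")    # e.g. \x5b -> '['
done
echo "${#string}"                 # 20 characters, beginning '[~CC7'
```

Code point 127 (DEL) lands in the string as a literal control character, which is exactly why the test feeds it to the target as an invalid name.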
common/autotest_common.sh@10 -- # set +x 00:11:28.763 ************************************ 00:11:28.763 START TEST nvmf_connect_stress 00:11:28.763 ************************************ 00:11:28.763 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:29.022 * Looking for test storage... 00:11:29.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.022 16:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.022 --rc genhtml_branch_coverage=1 00:11:29.022 --rc genhtml_function_coverage=1 00:11:29.022 --rc genhtml_legend=1 00:11:29.022 --rc geninfo_all_blocks=1 00:11:29.022 --rc geninfo_unexecuted_blocks=1 00:11:29.022 00:11:29.022 ' 00:11:29.022 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.022 --rc genhtml_branch_coverage=1 00:11:29.022 --rc genhtml_function_coverage=1 00:11:29.022 --rc genhtml_legend=1 00:11:29.022 --rc geninfo_all_blocks=1 00:11:29.023 --rc geninfo_unexecuted_blocks=1 00:11:29.023 00:11:29.023 ' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.023 --rc genhtml_branch_coverage=1 00:11:29.023 --rc genhtml_function_coverage=1 00:11:29.023 --rc genhtml_legend=1 00:11:29.023 --rc geninfo_all_blocks=1 00:11:29.023 --rc geninfo_unexecuted_blocks=1 00:11:29.023 00:11:29.023 ' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.023 --rc genhtml_branch_coverage=1 00:11:29.023 --rc 
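The `lt 1.15 2` / `cmp_versions` trace above splits each version on `IFS=.-:` into an array and compares component by component, padding the shorter array with zeros. A sketch of that logic under one illustrative function name (`ver_lt`; the real scripts/common.sh implementation also handles an equality operator, omitted here):

```shell
# Component-wise "less than" over dotted version strings, as traced above.
ver_lt() {
  local IFS='.-:' a b v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components compare as 0
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1   # equal is not less-than
}
```

With this, `ver_lt 1.15 2` succeeds (lcov 1.15 predates 2), which is why the trace then selects the lcov-1.x `--rc` option spelling.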
genhtml_function_coverage=1 00:11:29.023 --rc genhtml_legend=1 00:11:29.023 --rc geninfo_all_blocks=1 00:11:29.023 --rc geninfo_unexecuted_blocks=1 00:11:29.023 00:11:29.023 ' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:29.023 16:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
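The `common.sh: line 33: [: : integer expression expected` message in the trace above is a real bash diagnostic: `'[' '' -eq 1 ']'` applies the arithmetic `-eq` operator to an empty string. A sketch of the failing pattern and a defaulted guard (the `no_huge` variable name is illustrative, standing in for the unset setting being tested):

```shell
# '[ "" -eq 1 ]' errors out; defaulting the empty value to 0 first does not.
no_huge=''
if [ "${no_huge:-0}" -eq 1 ]; then   # ${var:-0} substitutes 0 when empty/unset
  echo "huge pages disabled"
else
  echo "huge pages enabled"
fi
```

The script continues anyway because `[` merely returns nonzero after printing the diagnostic, which is why the trace proceeds past line 33.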
00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.023 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.589 16:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.589 16:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.589 16:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.589 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.589 Found net devices under 0000:86:00.1: cvl_0_1 
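The discovery loop traced above finds each NIC's kernel netdev by globbing `/sys/bus/pci/devices/<pci>/net/*` and stripping the path prefix. A runnable sketch; the `list_pci_net_devs` function, `sysfs_root` parameter, and mock directory tree are illustrative so the snippet runs without the actual E810 hardware:

```shell
# Glob a (mock) sysfs tree for the net devices under a PCI address,
# mirroring the pci_net_devs expansion in nvmf/common.sh above.
list_pci_net_devs() {
  local sysfs_root=$1 pci=$2
  local devs=("$sysfs_root/devices/$pci/net/"*)
  devs=("${devs[@]##*/}")               # keep only the device names
  echo "Found net devices under $pci: ${devs[*]}"
}

mock=$(mktemp -d)                       # stand-in for /sys/bus/pci
mkdir -p "$mock/devices/0000:86:00.0/net/cvl_0_0"
list_pci_net_devs "$mock" 0000:86:00.0
```

Against the real sysfs root this yields exactly the `Found net devices under 0000:86:00.0: cvl_0_0` lines seen in the log.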
00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.589 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:11:35.590 00:11:35.590 --- 10.0.0.2 ping statistics --- 00:11:35.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.590 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:35.590 00:11:35.590 --- 10.0.0.1 ping statistics --- 00:11:35.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.590 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:35.590 16:23:01 
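The trace above (nvmf/common.sh@250-291) builds the test topology: one NIC is moved into a private network namespace for the target, both ends get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420 on the initiator side, and reachability is verified with a ping in each direction. A minimal sketch of that sequence follows; the interface names `tgt0`/`ini0`, the namespace name, and the `DRY_RUN` echo-only wrapper are illustrative assumptions (the real script uses the `cvl_0_0`/`cvl_0_1` devices it discovered earlier), and the commands need root when actually executed:

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based NVMe/TCP test topology.
# Assumptions: tgt0/ini0/nvmf_tgt_ns are hypothetical names; DRY_RUN
# defaults to 1 so the privileged commands are only printed.
set -eu

TARGET_IF=${TARGET_IF:-tgt0}        # NIC handed to the target namespace
INITIATOR_IF=${INITIATOR_IF:-ini0}  # NIC left in the root namespace
NS=${NS:-nvmf_tgt_ns}

run() {
    # With DRY_RUN=1 (the default here) commands are echoed, not run.
    if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$*"; else "$@"; fi
}

setup_topology() {
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"            # isolate the target NIC
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                              # initiator -> target
    run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
}
setup_topology
```

Moving the target NIC into its own namespace is what lets a single host exercise a real TCP path between initiator and target without external cabling.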
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2757796 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2757796 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2757796 ']' 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 [2024-11-04 16:23:01.686366] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:11:35.590 [2024-11-04 16:23:01.686409] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.590 [2024-11-04 16:23:01.753365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.590 [2024-11-04 16:23:01.795012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.590 [2024-11-04 16:23:01.795050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.590 [2024-11-04 16:23:01.795058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.590 [2024-11-04 16:23:01.795063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.590 [2024-11-04 16:23:01.795069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:35.590 [2024-11-04 16:23:01.796408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.590 [2024-11-04 16:23:01.796492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.590 [2024-11-04 16:23:01.796494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 [2024-11-04 16:23:01.939984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 [2024-11-04 16:23:01.960203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 NULL1 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2757818 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
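The target is then provisioned over the RPC socket (connect_stress.sh@15-18): a TCP transport with 8192-byte in-capsule data, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev. A sketch of the equivalent sequence is below; the method names and arguments are taken from the trace above, while the `rpc.py` path and the dry-run wrapper (standing in for the suite's `rpc_cmd` helper) are assumptions for illustration:

```shell
# Sketch of the RPC provisioning performed by connect_stress.sh@15-18.
# RPC path and the DRY_RUN wrapper are illustrative; method names and
# arguments mirror the trace above.
set -eu

RPC=${RPC:-scripts/rpc.py}   # hypothetical path to SPDK's rpc.py

rpc_cmd() {
    if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$RPC $*"; else "$RPC" "$@"; fi
}

provision_target() {
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                     # allow any host, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                         # listen inside the target netns
    rpc_cmd bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
}
provision_target
```

A null bdev discards writes and returns zeroes on reads, so the stress test exercises the connect/disconnect path without touching real storage.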
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.590 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.591 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.156 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.156 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:36.156 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.156 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.156 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:36.414 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.414 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.671 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.671 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:36.671 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.671 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.671 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.928 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.928 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:36.928 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.928 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.928 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.493 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.493 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:37.493 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.493 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.493 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.751 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.751 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:37.751 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.751 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.751 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.008 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.008 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:38.008 16:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.008 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.008 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.266 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.266 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:38.266 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.266 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.266 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.523 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.523 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:38.523 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.523 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.523 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.088 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.088 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:39.088 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.088 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.088 
16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.346 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.346 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:39.346 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.346 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.346 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.603 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.603 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:39.603 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.603 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.603 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.862 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.862 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:39.862 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.862 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.862 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.120 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.120 
16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:40.120 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.120 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.120 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.685 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.685 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:40.685 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.685 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.685 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.943 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.943 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:40.943 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.943 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.943 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.200 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.200 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:41.200 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:41.200 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.200 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.457 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.457 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:41.457 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.457 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.457 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.023 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.023 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:42.023 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.023 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.023 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.281 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.281 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:42.281 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.281 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.281 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:11:42.539 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.539 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:42.539 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.539 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.539 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.796 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.796 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:42.796 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.796 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.796 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.053 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.053 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:43.053 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.053 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.053 16:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.618 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.618 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2757818 00:11:43.618 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.618 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.618 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.876 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.876 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:43.876 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.876 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.876 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.134 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.134 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:44.134 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.134 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.134 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.391 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.391 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:44.391 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.391 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:44.391 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.957 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.957 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:44.957 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.957 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.957 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.215 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.215 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:45.215 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.215 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.215 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.472 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.472 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:45.472 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.472 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.472 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.472 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
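The long run of `kill -0 2757818` probes above is connect_stress.sh's liveness poll: signal 0 delivers nothing and only tests whether the PID still exists, so the loop keeps issuing RPCs until the stress workload exits. A minimal sketch of that pattern (the `sleep` workload and half-second interval are placeholders, not the test's real values):

```shell
# Liveness-poll sketch: probe a background PID with `kill -0` until it exits.
sleep 2 &                # placeholder workload; the real test polls a perf PID
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    # connect_stress.sh issues an rpc_cmd between probes; we just wait
    sleep 0.5
done
wait "$pid"              # reap the workload and pick up its exit status
echo "process $pid exited with status $?"
```

Once the process is gone, `kill -0` fails with "No such process", which is exactly the message logged below when the loop finally falls through.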
00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757818 00:11:45.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2757818) - No such process 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2757818 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.730 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.730 rmmod nvme_tcp 00:11:45.730 rmmod nvme_fabrics 00:11:45.730 rmmod nvme_keyring 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2757796 ']' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2757796 ']' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757796' 00:11:45.989 killing process with pid 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2757796 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.989 16:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.523 00:11:48.523 real 0m19.280s 00:11:48.523 user 0m40.384s 00:11:48.523 sys 0m8.522s 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.523 ************************************ 00:11:48.523 END TEST nvmf_connect_stress 00:11:48.523 ************************************ 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.523 ************************************ 00:11:48.523 START TEST nvmf_fused_ordering 00:11:48.523 ************************************ 00:11:48.523 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:48.523 * Looking for test storage... 00:11:48.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.523 16:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.523 --rc genhtml_branch_coverage=1 00:11:48.523 --rc genhtml_function_coverage=1 00:11:48.523 --rc genhtml_legend=1 00:11:48.523 --rc geninfo_all_blocks=1 00:11:48.523 --rc geninfo_unexecuted_blocks=1 00:11:48.523 00:11:48.523 ' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.523 --rc genhtml_branch_coverage=1 00:11:48.523 --rc genhtml_function_coverage=1 00:11:48.523 --rc genhtml_legend=1 00:11:48.523 --rc geninfo_all_blocks=1 00:11:48.523 --rc geninfo_unexecuted_blocks=1 00:11:48.523 00:11:48.523 ' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.523 --rc genhtml_branch_coverage=1 00:11:48.523 --rc genhtml_function_coverage=1 00:11:48.523 --rc genhtml_legend=1 00:11:48.523 --rc geninfo_all_blocks=1 00:11:48.523 --rc geninfo_unexecuted_blocks=1 00:11:48.523 00:11:48.523 ' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.523 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:48.523 --rc genhtml_branch_coverage=1 00:11:48.523 --rc genhtml_function_coverage=1 00:11:48.523 --rc genhtml_legend=1 00:11:48.523 --rc geninfo_all_blocks=1 00:11:48.523 --rc geninfo_unexecuted_blocks=1 00:11:48.523 00:11:48.523 ' 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.523 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.524 16:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
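The `lt 1.15 2` / `cmp_versions` trace above (from scripts/common.sh) compares dotted versions field by field after splitting on the separators, treating missing fields as 0. A hedged sketch of that comparison; `version_lt` is a hypothetical name standing in for the real `lt`/`cmp_versions` pair:

```shell
# version_lt A B: succeed (exit 0) iff version A sorts before version B.
# Splits on "." and compares numerically, padding the shorter version with 0s,
# mirroring the cmp_versions logic traced in scripts/common.sh.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent field compares as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2"
```

Field-wise numeric comparison is what makes `1.15 < 2` true here, where a plain string comparison would get `1.9` vs `1.10` wrong.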
00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.524 16:23:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.791 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:53.791 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.791 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:53.791 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.791 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.791 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:53.792 Found net devices under 0000:86:00.0: cvl_0_0 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:53.792 Found net devices under 0000:86:00.1: cvl_0_1 
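The "Found net devices under 0000:86:00.x" lines come from nvmf/common.sh globbing each NIC's sysfs node to map a PCI address to its kernel interface name. A sketch of that lookup under the same sysfs layout; `list_pci_net_devs` is a hypothetical helper and the address below is a placeholder for the detected `0000:86:00.x` devices:

```shell
# For a PCI address, glob /sys/bus/pci/devices/<addr>/net/* and print the
# basename of each entry, i.e. the interface name (cvl_0_0 in the log above).
list_pci_net_devs() {
    local pci=$1 dev
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue   # unmatched glob stays literal; skip it
        echo "${dev##*/}"           # strip the sysfs path, keep the ifname
    done
}

list_pci_net_devs "0000:86:00.0"
```

On a host without that NIC the function prints nothing, which is why the script also counts matches before deciding a device is usable.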
00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.792 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:11:53.792 00:11:53.792 --- 10.0.0.2 ping statistics --- 00:11:53.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.792 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:53.792 00:11:53.792 --- 10.0.0.1 ping statistics --- 00:11:53.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.792 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:53.792 16:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2762972 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2762972 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2762972 ']' 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 [2024-11-04 16:23:20.156985] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:11:53.792 [2024-11-04 16:23:20.157031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.792 [2024-11-04 16:23:20.224527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.792 [2024-11-04 16:23:20.265431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.792 [2024-11-04 16:23:20.265468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.792 [2024-11-04 16:23:20.265475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.792 [2024-11-04 16:23:20.265481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.792 [2024-11-04 16:23:20.265486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:53.792 [2024-11-04 16:23:20.266037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 [2024-11-04 16:23:20.400128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.792 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 [2024-11-04 16:23:20.416298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 NULL1 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:53.793 [2024-11-04 16:23:20.470585] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:11:53.793 [2024-11-04 16:23:20.470626] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762994 ] 00:11:54.051 Attached to nqn.2016-06.io.spdk:cnode1 00:11:54.051 Namespace ID: 1 size: 1GB 00:11:54.051 fused_ordering(0) 00:11:54.051 fused_ordering(1) 00:11:54.051 fused_ordering(2) 00:11:54.051 fused_ordering(3) 00:11:54.051 fused_ordering(4) 00:11:54.051 fused_ordering(5) 00:11:54.051 fused_ordering(6) 00:11:54.051 fused_ordering(7) 00:11:54.051 fused_ordering(8) 00:11:54.051 fused_ordering(9) 00:11:54.051 fused_ordering(10) 00:11:54.051 fused_ordering(11) 00:11:54.051 fused_ordering(12) 00:11:54.051 fused_ordering(13) 00:11:54.051 fused_ordering(14) 00:11:54.051 fused_ordering(15) 00:11:54.051 fused_ordering(16) 00:11:54.051 fused_ordering(17) 00:11:54.051 fused_ordering(18) 00:11:54.051 fused_ordering(19) 00:11:54.051 fused_ordering(20) 00:11:54.051 fused_ordering(21) 00:11:54.051 fused_ordering(22) 00:11:54.051 fused_ordering(23) 00:11:54.051 fused_ordering(24) 00:11:54.051 fused_ordering(25) 00:11:54.051 fused_ordering(26) 00:11:54.051 fused_ordering(27) 00:11:54.051 
fused_ordering(28) 00:11:54.051 fused_ordering(29) 00:11:54.051 fused_ordering(30) 00:11:54.051 fused_ordering(31) 00:11:54.051 fused_ordering(32) 00:11:54.051 fused_ordering(33) 00:11:54.051 fused_ordering(34) 00:11:54.051 fused_ordering(35) 00:11:54.051 fused_ordering(36) 00:11:54.051 fused_ordering(37) 00:11:54.051 fused_ordering(38) 00:11:54.051 fused_ordering(39) 00:11:54.051 fused_ordering(40) 00:11:54.051 fused_ordering(41) 00:11:54.051 fused_ordering(42) 00:11:54.051 fused_ordering(43) 00:11:54.051 fused_ordering(44) 00:11:54.051 fused_ordering(45) 00:11:54.051 fused_ordering(46) 00:11:54.051 fused_ordering(47) 00:11:54.051 fused_ordering(48) 00:11:54.051 fused_ordering(49) 00:11:54.051 fused_ordering(50) 00:11:54.051 fused_ordering(51) 00:11:54.051 fused_ordering(52) 00:11:54.051 fused_ordering(53) 00:11:54.051 fused_ordering(54) 00:11:54.051 fused_ordering(55) 00:11:54.051 fused_ordering(56) 00:11:54.051 fused_ordering(57) 00:11:54.051 fused_ordering(58) 00:11:54.051 fused_ordering(59) 00:11:54.051 fused_ordering(60) 00:11:54.051 fused_ordering(61) 00:11:54.051 fused_ordering(62) 00:11:54.051 fused_ordering(63) 00:11:54.051 fused_ordering(64) 00:11:54.051 fused_ordering(65) 00:11:54.051 fused_ordering(66) 00:11:54.051 fused_ordering(67) 00:11:54.051 fused_ordering(68) 00:11:54.051 fused_ordering(69) 00:11:54.051 fused_ordering(70) 00:11:54.051 fused_ordering(71) 00:11:54.051 fused_ordering(72) 00:11:54.051 fused_ordering(73) 00:11:54.051 fused_ordering(74) 00:11:54.051 fused_ordering(75) 00:11:54.051 fused_ordering(76) 00:11:54.051 fused_ordering(77) 00:11:54.051 fused_ordering(78) 00:11:54.051 fused_ordering(79) 00:11:54.051 fused_ordering(80) 00:11:54.051 fused_ordering(81) 00:11:54.051 fused_ordering(82) 00:11:54.051 fused_ordering(83) 00:11:54.051 fused_ordering(84) 00:11:54.051 fused_ordering(85) 00:11:54.051 fused_ordering(86) 00:11:54.051 fused_ordering(87) 00:11:54.051 fused_ordering(88) 00:11:54.051 fused_ordering(89) 00:11:54.051 
fused_ordering(90) 00:11:54.051 fused_ordering(91) 00:11:54.051 fused_ordering(92) 00:11:54.051 fused_ordering(93) 00:11:54.051 fused_ordering(94) 00:11:54.051 fused_ordering(95) 00:11:54.051 fused_ordering(96) 00:11:54.051 fused_ordering(97) 00:11:54.051 fused_ordering(98) 00:11:54.051 fused_ordering(99) 00:11:54.051 fused_ordering(100) 00:11:54.051 fused_ordering(101) 00:11:54.051 fused_ordering(102) 00:11:54.051 fused_ordering(103) 00:11:54.051 fused_ordering(104) 00:11:54.051 fused_ordering(105) 00:11:54.051 fused_ordering(106) 00:11:54.051 fused_ordering(107) 00:11:54.051 fused_ordering(108) 00:11:54.051 fused_ordering(109) 00:11:54.051 fused_ordering(110) 00:11:54.051 fused_ordering(111) 00:11:54.051 fused_ordering(112) 00:11:54.051 fused_ordering(113) 00:11:54.051 fused_ordering(114) 00:11:54.051 fused_ordering(115) 00:11:54.051 fused_ordering(116) 00:11:54.051 fused_ordering(117) 00:11:54.051 fused_ordering(118) 00:11:54.051 fused_ordering(119) 00:11:54.051 fused_ordering(120) 00:11:54.051 fused_ordering(121) 00:11:54.051 fused_ordering(122) 00:11:54.051 fused_ordering(123) 00:11:54.051 fused_ordering(124) 00:11:54.051 fused_ordering(125) 00:11:54.051 fused_ordering(126) 00:11:54.051 fused_ordering(127) 00:11:54.051 fused_ordering(128) 00:11:54.051 fused_ordering(129) 00:11:54.051 fused_ordering(130) 00:11:54.051 fused_ordering(131) 00:11:54.051 fused_ordering(132) 00:11:54.051 fused_ordering(133) 00:11:54.051 fused_ordering(134) 00:11:54.051 fused_ordering(135) 00:11:54.051 fused_ordering(136) 00:11:54.051 fused_ordering(137) 00:11:54.051 fused_ordering(138) 00:11:54.051 fused_ordering(139) 00:11:54.051 fused_ordering(140) 00:11:54.051 fused_ordering(141) 00:11:54.051 fused_ordering(142) 00:11:54.051 fused_ordering(143) 00:11:54.051 fused_ordering(144) 00:11:54.051 fused_ordering(145) 00:11:54.051 fused_ordering(146) 00:11:54.051 fused_ordering(147) 00:11:54.051 fused_ordering(148) 00:11:54.051 fused_ordering(149) 00:11:54.051 fused_ordering(150) 
00:11:54.051 fused_ordering(151) 00:11:54.051 fused_ordering(152) 00:11:54.051 fused_ordering(153) 00:11:54.051 fused_ordering(154) 00:11:54.051 fused_ordering(155) 00:11:54.051 fused_ordering(156) 00:11:54.052 fused_ordering(157) 00:11:54.052 fused_ordering(158) 00:11:54.052 fused_ordering(159) 00:11:54.052 fused_ordering(160) 00:11:54.052 fused_ordering(161) 00:11:54.052 fused_ordering(162) 00:11:54.052 fused_ordering(163) 00:11:54.052 fused_ordering(164) 00:11:54.052 fused_ordering(165) 00:11:54.052 fused_ordering(166) 00:11:54.052 fused_ordering(167) 00:11:54.052 fused_ordering(168) 00:11:54.052 fused_ordering(169) 00:11:54.052 fused_ordering(170) 00:11:54.052 fused_ordering(171) 00:11:54.052 fused_ordering(172) 00:11:54.052 fused_ordering(173) 00:11:54.052 fused_ordering(174) 00:11:54.052 fused_ordering(175) 00:11:54.052 fused_ordering(176) 00:11:54.052 fused_ordering(177) 00:11:54.052 fused_ordering(178) 00:11:54.052 fused_ordering(179) 00:11:54.052 fused_ordering(180) 00:11:54.052 fused_ordering(181) 00:11:54.052 fused_ordering(182) 00:11:54.052 fused_ordering(183) 00:11:54.052 fused_ordering(184) 00:11:54.052 fused_ordering(185) 00:11:54.052 fused_ordering(186) 00:11:54.052 fused_ordering(187) 00:11:54.052 fused_ordering(188) 00:11:54.052 fused_ordering(189) 00:11:54.052 fused_ordering(190) 00:11:54.052 fused_ordering(191) 00:11:54.052 fused_ordering(192) 00:11:54.052 fused_ordering(193) 00:11:54.052 fused_ordering(194) 00:11:54.052 fused_ordering(195) 00:11:54.052 fused_ordering(196) 00:11:54.052 fused_ordering(197) 00:11:54.052 fused_ordering(198) 00:11:54.052 fused_ordering(199) 00:11:54.052 fused_ordering(200) 00:11:54.052 fused_ordering(201) 00:11:54.052 fused_ordering(202) 00:11:54.052 fused_ordering(203) 00:11:54.052 fused_ordering(204) 00:11:54.052 fused_ordering(205) 00:11:54.310 fused_ordering(206) 00:11:54.310 fused_ordering(207) 00:11:54.310 fused_ordering(208) 00:11:54.310 fused_ordering(209) 00:11:54.310 fused_ordering(210) 00:11:54.310 
fused_ordering(211) 00:11:54.310 fused_ordering(212) 00:11:54.310 fused_ordering(213) 00:11:54.310 fused_ordering(214) 00:11:54.310 fused_ordering(215) 00:11:54.310 fused_ordering(216) 00:11:54.310 fused_ordering(217) 00:11:54.310 fused_ordering(218) 00:11:54.310 fused_ordering(219) 00:11:54.310 fused_ordering(220) 00:11:54.310 fused_ordering(221) 00:11:54.310 fused_ordering(222) 00:11:54.310 fused_ordering(223) 00:11:54.310 fused_ordering(224) 00:11:54.310 fused_ordering(225) 00:11:54.310 fused_ordering(226) 00:11:54.310 fused_ordering(227) 00:11:54.310 fused_ordering(228) 00:11:54.310 fused_ordering(229) 00:11:54.310 fused_ordering(230) 00:11:54.310 fused_ordering(231) 00:11:54.310 fused_ordering(232) 00:11:54.310 fused_ordering(233) 00:11:54.310 fused_ordering(234) 00:11:54.310 fused_ordering(235) 00:11:54.310 fused_ordering(236) 00:11:54.310 fused_ordering(237) 00:11:54.310 fused_ordering(238) 00:11:54.310 fused_ordering(239) 00:11:54.310 fused_ordering(240) 00:11:54.310 fused_ordering(241) 00:11:54.310 fused_ordering(242) 00:11:54.310 fused_ordering(243) 00:11:54.310 fused_ordering(244) 00:11:54.310 fused_ordering(245) 00:11:54.310 fused_ordering(246) 00:11:54.310 fused_ordering(247) 00:11:54.310 fused_ordering(248) 00:11:54.310 fused_ordering(249) 00:11:54.310 fused_ordering(250) 00:11:54.310 fused_ordering(251) 00:11:54.310 fused_ordering(252) 00:11:54.310 fused_ordering(253) 00:11:54.310 fused_ordering(254) 00:11:54.310 fused_ordering(255) 00:11:54.310 fused_ordering(256) 00:11:54.310 fused_ordering(257) 00:11:54.310 fused_ordering(258) 00:11:54.310 fused_ordering(259) 00:11:54.310 fused_ordering(260) 00:11:54.310 fused_ordering(261) 00:11:54.310 fused_ordering(262) 00:11:54.310 fused_ordering(263) 00:11:54.310 fused_ordering(264) 00:11:54.310 fused_ordering(265) 00:11:54.310 fused_ordering(266) 00:11:54.310 fused_ordering(267) 00:11:54.310 fused_ordering(268) 00:11:54.310 fused_ordering(269) 00:11:54.310 fused_ordering(270) 00:11:54.310 fused_ordering(271) 
00:11:54.310 fused_ordering(272) 00:11:54.310 fused_ordering(273) 00:11:54.310 fused_ordering(274) 00:11:54.310 fused_ordering(275) 00:11:54.310 fused_ordering(276) 00:11:54.310 fused_ordering(277) 00:11:54.310 fused_ordering(278) 00:11:54.310 fused_ordering(279) 00:11:54.310 fused_ordering(280) 00:11:54.310 fused_ordering(281) 00:11:54.310 fused_ordering(282) 00:11:54.310 fused_ordering(283) 00:11:54.310 fused_ordering(284) 00:11:54.310 fused_ordering(285) 00:11:54.310 fused_ordering(286) 00:11:54.310 fused_ordering(287) 00:11:54.310 fused_ordering(288) 00:11:54.310 fused_ordering(289) 00:11:54.310 fused_ordering(290) 00:11:54.310 fused_ordering(291) 00:11:54.310 fused_ordering(292) 00:11:54.310 fused_ordering(293) 00:11:54.310 fused_ordering(294) 00:11:54.310 fused_ordering(295) 00:11:54.310 fused_ordering(296) 00:11:54.310 fused_ordering(297) 00:11:54.310 fused_ordering(298) 00:11:54.310 fused_ordering(299) 00:11:54.310 fused_ordering(300) 00:11:54.310 fused_ordering(301) 00:11:54.310 fused_ordering(302) 00:11:54.310 fused_ordering(303) 00:11:54.310 fused_ordering(304) 00:11:54.310 fused_ordering(305) 00:11:54.310 fused_ordering(306) 00:11:54.310 fused_ordering(307) 00:11:54.310 fused_ordering(308) 00:11:54.310 fused_ordering(309) 00:11:54.310 fused_ordering(310) 00:11:54.310 fused_ordering(311) 00:11:54.310 fused_ordering(312) 00:11:54.310 fused_ordering(313) 00:11:54.310 fused_ordering(314) 00:11:54.310 fused_ordering(315) 00:11:54.310 fused_ordering(316) 00:11:54.310 fused_ordering(317) 00:11:54.310 fused_ordering(318) 00:11:54.310 fused_ordering(319) 00:11:54.310 fused_ordering(320) 00:11:54.310 fused_ordering(321) 00:11:54.310 fused_ordering(322) 00:11:54.310 fused_ordering(323) 00:11:54.310 fused_ordering(324) 00:11:54.310 fused_ordering(325) 00:11:54.310 fused_ordering(326) 00:11:54.310 fused_ordering(327) 00:11:54.310 fused_ordering(328) 00:11:54.310 fused_ordering(329) 00:11:54.310 fused_ordering(330) 00:11:54.310 fused_ordering(331) 00:11:54.310 
fused_ordering(332) 00:11:54.310 fused_ordering(333) 00:11:54.310 fused_ordering(334) 00:11:54.310 fused_ordering(335) 00:11:54.310 fused_ordering(336) 00:11:54.310 fused_ordering(337) 00:11:54.310 fused_ordering(338) 00:11:54.310 fused_ordering(339) 00:11:54.310 fused_ordering(340) 00:11:54.310 fused_ordering(341) 00:11:54.310 fused_ordering(342) 00:11:54.310 fused_ordering(343) 00:11:54.310 fused_ordering(344) 00:11:54.310 fused_ordering(345) 00:11:54.310 fused_ordering(346) 00:11:54.310 fused_ordering(347) 00:11:54.310 fused_ordering(348) 00:11:54.310 fused_ordering(349) 00:11:54.310 fused_ordering(350) 00:11:54.310 fused_ordering(351) 00:11:54.310 fused_ordering(352) 00:11:54.310 fused_ordering(353) 00:11:54.310 fused_ordering(354) 00:11:54.310 fused_ordering(355) 00:11:54.310 fused_ordering(356) 00:11:54.310 fused_ordering(357) 00:11:54.310 fused_ordering(358) 00:11:54.310 fused_ordering(359) 00:11:54.310 fused_ordering(360) 00:11:54.310 fused_ordering(361) 00:11:54.310 fused_ordering(362) 00:11:54.310 fused_ordering(363) 00:11:54.310 fused_ordering(364) 00:11:54.310 fused_ordering(365) 00:11:54.310 fused_ordering(366) 00:11:54.310 fused_ordering(367) 00:11:54.310 fused_ordering(368) 00:11:54.310 fused_ordering(369) 00:11:54.310 fused_ordering(370) 00:11:54.310 fused_ordering(371) 00:11:54.310 fused_ordering(372) 00:11:54.310 fused_ordering(373) 00:11:54.310 fused_ordering(374) 00:11:54.310 fused_ordering(375) 00:11:54.310 fused_ordering(376) 00:11:54.310 fused_ordering(377) 00:11:54.310 fused_ordering(378) 00:11:54.310 fused_ordering(379) 00:11:54.310 fused_ordering(380) 00:11:54.310 fused_ordering(381) 00:11:54.310 fused_ordering(382) 00:11:54.310 fused_ordering(383) 00:11:54.310 fused_ordering(384) 00:11:54.310 fused_ordering(385) 00:11:54.310 fused_ordering(386) 00:11:54.310 fused_ordering(387) 00:11:54.310 fused_ordering(388) 00:11:54.310 fused_ordering(389) 00:11:54.310 fused_ordering(390) 00:11:54.310 fused_ordering(391) 00:11:54.310 fused_ordering(392) 
00:11:54.310 fused_ordering(393) 00:11:54.310 fused_ordering(394) 00:11:54.310 fused_ordering(395) 00:11:54.310 fused_ordering(396) 00:11:54.311 fused_ordering(397) 00:11:54.311 fused_ordering(398) 00:11:54.311 fused_ordering(399) 00:11:54.311 fused_ordering(400) 00:11:54.311 fused_ordering(401) 00:11:54.311 fused_ordering(402) 00:11:54.311 fused_ordering(403) 00:11:54.311 fused_ordering(404) 00:11:54.311 fused_ordering(405) 00:11:54.311 fused_ordering(406) 00:11:54.311 fused_ordering(407) 00:11:54.311 fused_ordering(408) 00:11:54.311 fused_ordering(409) 00:11:54.311 fused_ordering(410) 00:11:54.569 fused_ordering(411) 00:11:54.569 fused_ordering(412) 00:11:54.569 fused_ordering(413) 00:11:54.569 fused_ordering(414) 00:11:54.569 fused_ordering(415) 00:11:54.569 fused_ordering(416) 00:11:54.569 fused_ordering(417) 00:11:54.569 fused_ordering(418) 00:11:54.569 fused_ordering(419) 00:11:54.569 fused_ordering(420) 00:11:54.569 fused_ordering(421) 00:11:54.569 fused_ordering(422) 00:11:54.569 fused_ordering(423) 00:11:54.569 fused_ordering(424) 00:11:54.569 fused_ordering(425) 00:11:54.569 fused_ordering(426) 00:11:54.569 fused_ordering(427) 00:11:54.569 fused_ordering(428) 00:11:54.569 fused_ordering(429) 00:11:54.569 fused_ordering(430) 00:11:54.569 fused_ordering(431) 00:11:54.569 fused_ordering(432) 00:11:54.569 fused_ordering(433) 00:11:54.569 fused_ordering(434) 00:11:54.569 fused_ordering(435) 00:11:54.569 fused_ordering(436) 00:11:54.569 fused_ordering(437) 00:11:54.569 fused_ordering(438) 00:11:54.569 fused_ordering(439) 00:11:54.569 fused_ordering(440) 00:11:54.569 fused_ordering(441) 00:11:54.569 fused_ordering(442) 00:11:54.569 fused_ordering(443) 00:11:54.569 fused_ordering(444) 00:11:54.569 fused_ordering(445) 00:11:54.569 fused_ordering(446) 00:11:54.569 fused_ordering(447) 00:11:54.569 fused_ordering(448) 00:11:54.569 fused_ordering(449) 00:11:54.569 fused_ordering(450) 00:11:54.569 fused_ordering(451) 00:11:54.569 fused_ordering(452) 00:11:54.569 
fused_ordering(453) 00:11:54.569 fused_ordering(454) 00:11:54.569 fused_ordering(455) 00:11:54.569 fused_ordering(456) 00:11:54.569 fused_ordering(457) 00:11:54.569 fused_ordering(458) 00:11:54.569 fused_ordering(459) 00:11:54.569 fused_ordering(460) 00:11:54.569 fused_ordering(461) 00:11:54.569 fused_ordering(462) 00:11:54.569 fused_ordering(463) 00:11:54.569 fused_ordering(464) 00:11:54.569 fused_ordering(465) 00:11:54.569 fused_ordering(466) 00:11:54.569 fused_ordering(467) 00:11:54.569 fused_ordering(468) 00:11:54.569 fused_ordering(469) 00:11:54.569 fused_ordering(470) 00:11:54.569 fused_ordering(471) 00:11:54.569 fused_ordering(472) 00:11:54.569 fused_ordering(473) 00:11:54.569 fused_ordering(474) 00:11:54.569 fused_ordering(475) 00:11:54.569 fused_ordering(476) 00:11:54.569 fused_ordering(477) 00:11:54.569 fused_ordering(478) 00:11:54.569 fused_ordering(479) 00:11:54.569 fused_ordering(480) 00:11:54.569 fused_ordering(481) 00:11:54.569 fused_ordering(482) 00:11:54.569 fused_ordering(483) 00:11:54.569 fused_ordering(484) 00:11:54.569 fused_ordering(485) 00:11:54.569 fused_ordering(486) 00:11:54.569 fused_ordering(487) 00:11:54.569 fused_ordering(488) 00:11:54.569 fused_ordering(489) 00:11:54.569 fused_ordering(490) 00:11:54.569 fused_ordering(491) 00:11:54.569 fused_ordering(492) 00:11:54.569 fused_ordering(493) 00:11:54.569 fused_ordering(494) 00:11:54.569 fused_ordering(495) 00:11:54.569 fused_ordering(496) 00:11:54.569 fused_ordering(497) 00:11:54.569 fused_ordering(498) 00:11:54.569 fused_ordering(499) 00:11:54.569 fused_ordering(500) 00:11:54.569 fused_ordering(501) 00:11:54.569 fused_ordering(502) 00:11:54.569 fused_ordering(503) 00:11:54.569 fused_ordering(504) 00:11:54.569 fused_ordering(505) 00:11:54.569 fused_ordering(506) 00:11:54.569 fused_ordering(507) 00:11:54.569 fused_ordering(508) 00:11:54.569 fused_ordering(509) 00:11:54.569 fused_ordering(510) 00:11:54.569 fused_ordering(511) 00:11:54.569 fused_ordering(512) 00:11:54.569 fused_ordering(513) 
00:11:54.569 fused_ordering(514) 00:11:54.569 fused_ordering(515) 00:11:54.569 fused_ordering(516) 00:11:54.569 fused_ordering(517) 00:11:54.569 fused_ordering(518) 00:11:54.569 fused_ordering(519) 00:11:54.569 fused_ordering(520) 00:11:54.569 fused_ordering(521) 00:11:54.569 fused_ordering(522) 00:11:54.569 fused_ordering(523) 00:11:54.569 fused_ordering(524) 00:11:54.569 fused_ordering(525) 00:11:54.569 fused_ordering(526) 00:11:54.569 fused_ordering(527) 00:11:54.569 fused_ordering(528) 00:11:54.569 fused_ordering(529) 00:11:54.569 fused_ordering(530) 00:11:54.569 fused_ordering(531) 00:11:54.569 fused_ordering(532) 00:11:54.569 fused_ordering(533) 00:11:54.569 fused_ordering(534) 00:11:54.569 fused_ordering(535) 00:11:54.569 fused_ordering(536) 00:11:54.569 fused_ordering(537) 00:11:54.569 fused_ordering(538) 00:11:54.569 fused_ordering(539) 00:11:54.569 fused_ordering(540) 00:11:54.569 fused_ordering(541) 00:11:54.569 fused_ordering(542) 00:11:54.569 fused_ordering(543) 00:11:54.569 fused_ordering(544) 00:11:54.569 fused_ordering(545) 00:11:54.569 fused_ordering(546) 00:11:54.569 fused_ordering(547) 00:11:54.569 fused_ordering(548) 00:11:54.569 fused_ordering(549) 00:11:54.569 fused_ordering(550) 00:11:54.569 fused_ordering(551) 00:11:54.569 fused_ordering(552) 00:11:54.569 fused_ordering(553) 00:11:54.569 fused_ordering(554) 00:11:54.569 fused_ordering(555) 00:11:54.569 fused_ordering(556) 00:11:54.569 fused_ordering(557) 00:11:54.569 fused_ordering(558) 00:11:54.569 fused_ordering(559) 00:11:54.569 fused_ordering(560) 00:11:54.569 fused_ordering(561) 00:11:54.569 fused_ordering(562) 00:11:54.569 fused_ordering(563) 00:11:54.569 fused_ordering(564) 00:11:54.569 fused_ordering(565) 00:11:54.569 fused_ordering(566) 00:11:54.569 fused_ordering(567) 00:11:54.569 fused_ordering(568) 00:11:54.569 fused_ordering(569) 00:11:54.569 fused_ordering(570) 00:11:54.569 fused_ordering(571) 00:11:54.569 fused_ordering(572) 00:11:54.569 fused_ordering(573) 00:11:54.569 
fused_ordering(574) 00:11:54.569 fused_ordering(575) 00:11:54.569 fused_ordering(576) 00:11:54.569 fused_ordering(577) 00:11:54.569 fused_ordering(578) 00:11:54.569 fused_ordering(579) 00:11:54.569 fused_ordering(580) 00:11:54.569 fused_ordering(581) 00:11:54.569 fused_ordering(582) 00:11:54.569 fused_ordering(583) 00:11:54.569 fused_ordering(584) 00:11:54.569 fused_ordering(585) 00:11:54.569 fused_ordering(586) 00:11:54.569 fused_ordering(587) 00:11:54.569 fused_ordering(588) 00:11:54.569 fused_ordering(589) 00:11:54.569 fused_ordering(590) 00:11:54.569 fused_ordering(591) 00:11:54.569 fused_ordering(592) 00:11:54.569 fused_ordering(593) 00:11:54.569 fused_ordering(594) 00:11:54.569 fused_ordering(595) 00:11:54.569 fused_ordering(596) 00:11:54.569 fused_ordering(597) 00:11:54.569 fused_ordering(598) 00:11:54.569 fused_ordering(599) 00:11:54.569 fused_ordering(600) 00:11:54.569 fused_ordering(601) 00:11:54.569 fused_ordering(602) 00:11:54.569 fused_ordering(603) 00:11:54.569 fused_ordering(604) 00:11:54.569 fused_ordering(605) 00:11:54.569 fused_ordering(606) 00:11:54.569 fused_ordering(607) 00:11:54.569 fused_ordering(608) 00:11:54.569 fused_ordering(609) 00:11:54.569 fused_ordering(610) 00:11:54.569 fused_ordering(611) 00:11:54.569 fused_ordering(612) 00:11:54.569 fused_ordering(613) 00:11:54.569 fused_ordering(614) 00:11:54.569 fused_ordering(615) 00:11:55.135 fused_ordering(616) 00:11:55.135 fused_ordering(617) 00:11:55.135 fused_ordering(618) 00:11:55.135 fused_ordering(619) 00:11:55.135 fused_ordering(620) 00:11:55.135 fused_ordering(621) 00:11:55.135 fused_ordering(622) 00:11:55.135 fused_ordering(623) 00:11:55.135 fused_ordering(624) 00:11:55.135 fused_ordering(625) 00:11:55.135 fused_ordering(626) 00:11:55.135 fused_ordering(627) 00:11:55.135 fused_ordering(628) 00:11:55.135 fused_ordering(629) 00:11:55.135 fused_ordering(630) 00:11:55.135 fused_ordering(631) 00:11:55.135 fused_ordering(632) 00:11:55.135 fused_ordering(633) 00:11:55.135 fused_ordering(634) 
00:11:55.135 fused_ordering(635) 00:11:55.135 fused_ordering(636) 00:11:55.135 fused_ordering(637) 00:11:55.135 fused_ordering(638) 00:11:55.135 fused_ordering(639) 00:11:55.135 fused_ordering(640) 00:11:55.135 fused_ordering(641) 00:11:55.135 fused_ordering(642) 00:11:55.135 fused_ordering(643) 00:11:55.135 fused_ordering(644) 00:11:55.135 fused_ordering(645) 00:11:55.135 fused_ordering(646) 00:11:55.135 fused_ordering(647) 00:11:55.135 fused_ordering(648) 00:11:55.135 fused_ordering(649) 00:11:55.135 fused_ordering(650) 00:11:55.135 fused_ordering(651) 00:11:55.135 fused_ordering(652) 00:11:55.135 fused_ordering(653) 00:11:55.135 fused_ordering(654) 00:11:55.135 fused_ordering(655) 00:11:55.135 fused_ordering(656) 00:11:55.135 fused_ordering(657) 00:11:55.135 fused_ordering(658) 00:11:55.135 fused_ordering(659) 00:11:55.135 fused_ordering(660) 00:11:55.135 fused_ordering(661) 00:11:55.135 fused_ordering(662) 00:11:55.135 fused_ordering(663) 00:11:55.135 fused_ordering(664) 00:11:55.135 fused_ordering(665) 00:11:55.135 fused_ordering(666) 00:11:55.135 fused_ordering(667) 00:11:55.135 fused_ordering(668) 00:11:55.135 fused_ordering(669) 00:11:55.135 fused_ordering(670) 00:11:55.135 fused_ordering(671) 00:11:55.135 fused_ordering(672) 00:11:55.135 fused_ordering(673) 00:11:55.135 fused_ordering(674) 00:11:55.135 fused_ordering(675) 00:11:55.135 fused_ordering(676) 00:11:55.135 fused_ordering(677) 00:11:55.135 fused_ordering(678) 00:11:55.135 fused_ordering(679) 00:11:55.135 fused_ordering(680) 00:11:55.135 fused_ordering(681) 00:11:55.135 fused_ordering(682) 00:11:55.135 fused_ordering(683) 00:11:55.135 fused_ordering(684) 00:11:55.135 fused_ordering(685) 00:11:55.135 fused_ordering(686) 00:11:55.135 fused_ordering(687) 00:11:55.135 fused_ordering(688) 00:11:55.135 fused_ordering(689) 00:11:55.135 fused_ordering(690) 00:11:55.135 fused_ordering(691) 00:11:55.135 fused_ordering(692) 00:11:55.135 fused_ordering(693) 00:11:55.135 fused_ordering(694) 00:11:55.135 
fused_ordering(695) 00:11:55.135 fused_ordering(696) 00:11:55.135 fused_ordering(697) 00:11:55.135 fused_ordering(698) 00:11:55.135 fused_ordering(699) 00:11:55.135 fused_ordering(700) 00:11:55.135 fused_ordering(701) 00:11:55.135 fused_ordering(702) 00:11:55.135 fused_ordering(703) 00:11:55.135 fused_ordering(704) 00:11:55.135 fused_ordering(705) 00:11:55.135 fused_ordering(706) 00:11:55.135 fused_ordering(707) 00:11:55.135 fused_ordering(708) 00:11:55.135 fused_ordering(709) 00:11:55.135 fused_ordering(710) 00:11:55.135 fused_ordering(711) 00:11:55.135 fused_ordering(712) 00:11:55.135 fused_ordering(713) 00:11:55.135 fused_ordering(714) 00:11:55.135 fused_ordering(715) 00:11:55.135 fused_ordering(716) 00:11:55.135 fused_ordering(717) 00:11:55.135 fused_ordering(718) 00:11:55.135 fused_ordering(719) 00:11:55.135 fused_ordering(720) 00:11:55.135 fused_ordering(721) 00:11:55.135 fused_ordering(722) 00:11:55.135 fused_ordering(723) 00:11:55.135 fused_ordering(724) 00:11:55.135 fused_ordering(725) 00:11:55.135 fused_ordering(726) 00:11:55.135 fused_ordering(727) 00:11:55.135 fused_ordering(728) 00:11:55.135 fused_ordering(729) 00:11:55.135 fused_ordering(730) 00:11:55.135 fused_ordering(731) 00:11:55.135 fused_ordering(732) 00:11:55.135 fused_ordering(733) 00:11:55.135 fused_ordering(734) 00:11:55.135 fused_ordering(735) 00:11:55.135 fused_ordering(736) 00:11:55.135 fused_ordering(737) 00:11:55.135 fused_ordering(738) 00:11:55.135 fused_ordering(739) 00:11:55.135 fused_ordering(740) 00:11:55.135 fused_ordering(741) 00:11:55.135 fused_ordering(742) 00:11:55.135 fused_ordering(743) 00:11:55.135 fused_ordering(744) 00:11:55.135 fused_ordering(745) 00:11:55.135 fused_ordering(746) 00:11:55.135 fused_ordering(747) 00:11:55.135 fused_ordering(748) 00:11:55.135 fused_ordering(749) 00:11:55.135 fused_ordering(750) 00:11:55.135 fused_ordering(751) 00:11:55.135 fused_ordering(752) 00:11:55.135 fused_ordering(753) 00:11:55.135 fused_ordering(754) 00:11:55.135 fused_ordering(755) 
00:11:55.135 fused_ordering(756) 00:11:55.135 fused_ordering(757) 00:11:55.135 fused_ordering(758) 00:11:55.135 fused_ordering(759) 00:11:55.135 fused_ordering(760) 00:11:55.135 fused_ordering(761) 00:11:55.135 fused_ordering(762) 00:11:55.135 fused_ordering(763) 00:11:55.135 fused_ordering(764) 00:11:55.135 fused_ordering(765) 00:11:55.135 fused_ordering(766) 00:11:55.135 fused_ordering(767) 00:11:55.135 fused_ordering(768) 00:11:55.135 fused_ordering(769) 00:11:55.135 fused_ordering(770) 00:11:55.135 fused_ordering(771) 00:11:55.135 fused_ordering(772) 00:11:55.135 fused_ordering(773) 00:11:55.135 fused_ordering(774) 00:11:55.135 fused_ordering(775) 00:11:55.135 fused_ordering(776) 00:11:55.135 fused_ordering(777) 00:11:55.135 fused_ordering(778) 00:11:55.135 fused_ordering(779) 00:11:55.135 fused_ordering(780) 00:11:55.135 fused_ordering(781) 00:11:55.135 fused_ordering(782) 00:11:55.135 fused_ordering(783) 00:11:55.135 fused_ordering(784) 00:11:55.135 fused_ordering(785) 00:11:55.135 fused_ordering(786) 00:11:55.135 fused_ordering(787) 00:11:55.135 fused_ordering(788) 00:11:55.135 fused_ordering(789) 00:11:55.135 fused_ordering(790) 00:11:55.135 fused_ordering(791) 00:11:55.135 fused_ordering(792) 00:11:55.135 fused_ordering(793) 00:11:55.135 fused_ordering(794) 00:11:55.135 fused_ordering(795) 00:11:55.135 fused_ordering(796) 00:11:55.135 fused_ordering(797) 00:11:55.135 fused_ordering(798) 00:11:55.135 fused_ordering(799) 00:11:55.135 fused_ordering(800) 00:11:55.135 fused_ordering(801) 00:11:55.135 fused_ordering(802) 00:11:55.135 fused_ordering(803) 00:11:55.135 fused_ordering(804) 00:11:55.135 fused_ordering(805) 00:11:55.135 fused_ordering(806) 00:11:55.135 fused_ordering(807) 00:11:55.135 fused_ordering(808) 00:11:55.135 fused_ordering(809) 00:11:55.135 fused_ordering(810) 00:11:55.135 fused_ordering(811) 00:11:55.135 fused_ordering(812) 00:11:55.135 fused_ordering(813) 00:11:55.135 fused_ordering(814) 00:11:55.135 fused_ordering(815) 00:11:55.135 
fused_ordering(816) 00:11:55.135 fused_ordering(817) 00:11:55.135 fused_ordering(818) 00:11:55.135 fused_ordering(819) 00:11:55.135 fused_ordering(820) 00:11:55.393 [2024-11-04 16:23:22.184864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe21f0 is same with the state(6) to be set 00:11:55.393 fused_ordering(821) 00:11:55.393 fused_ordering(822) 00:11:55.393 fused_ordering(823) 00:11:55.393 fused_ordering(824) 00:11:55.393 fused_ordering(825) 00:11:55.393 fused_ordering(826) 00:11:55.393 fused_ordering(827) 00:11:55.393 fused_ordering(828) 00:11:55.394 fused_ordering(829) 00:11:55.394 fused_ordering(830) 00:11:55.394 fused_ordering(831) 00:11:55.394 fused_ordering(832) 00:11:55.394 fused_ordering(833) 00:11:55.394 fused_ordering(834) 00:11:55.394 fused_ordering(835) 00:11:55.394 fused_ordering(836) 00:11:55.394 fused_ordering(837) 00:11:55.394 fused_ordering(838) 00:11:55.394 fused_ordering(839) 00:11:55.394 fused_ordering(840) 00:11:55.394 fused_ordering(841) 00:11:55.394 fused_ordering(842) 00:11:55.394 fused_ordering(843) 00:11:55.394 fused_ordering(844) 00:11:55.394 fused_ordering(845) 00:11:55.394 fused_ordering(846) 00:11:55.394 fused_ordering(847) 00:11:55.394 fused_ordering(848) 00:11:55.394 fused_ordering(849) 00:11:55.394 fused_ordering(850) 00:11:55.394 fused_ordering(851) 00:11:55.394 fused_ordering(852) 00:11:55.394 fused_ordering(853) 00:11:55.394 fused_ordering(854) 00:11:55.394 fused_ordering(855) 00:11:55.394 fused_ordering(856) 00:11:55.394 fused_ordering(857) 00:11:55.394 fused_ordering(858) 00:11:55.394 fused_ordering(859) 00:11:55.394 fused_ordering(860) 00:11:55.394 fused_ordering(861) 00:11:55.394 fused_ordering(862) 00:11:55.394 fused_ordering(863) 00:11:55.394 fused_ordering(864) 00:11:55.394 fused_ordering(865) 00:11:55.394 fused_ordering(866) 00:11:55.394 fused_ordering(867) 00:11:55.394 fused_ordering(868) 00:11:55.394 fused_ordering(869) 00:11:55.394 fused_ordering(870) 00:11:55.394 fused_ordering(871)
00:11:55.394 fused_ordering(872) 00:11:55.394 fused_ordering(873) 00:11:55.394 fused_ordering(874) 00:11:55.394 fused_ordering(875) 00:11:55.394 fused_ordering(876) 00:11:55.394 fused_ordering(877) 00:11:55.394 fused_ordering(878) 00:11:55.394 fused_ordering(879) 00:11:55.394 fused_ordering(880) 00:11:55.394 fused_ordering(881) 00:11:55.394 fused_ordering(882) 00:11:55.394 fused_ordering(883) 00:11:55.394 fused_ordering(884) 00:11:55.394 fused_ordering(885) 00:11:55.394 fused_ordering(886) 00:11:55.394 fused_ordering(887) 00:11:55.394 fused_ordering(888) 00:11:55.394 fused_ordering(889) 00:11:55.394 fused_ordering(890) 00:11:55.394 fused_ordering(891) 00:11:55.394 fused_ordering(892) 00:11:55.394 fused_ordering(893) 00:11:55.394 fused_ordering(894) 00:11:55.394 fused_ordering(895) 00:11:55.394 fused_ordering(896) 00:11:55.394 fused_ordering(897) 00:11:55.394 fused_ordering(898) 00:11:55.394 fused_ordering(899) 00:11:55.394 fused_ordering(900) 00:11:55.394 fused_ordering(901) 00:11:55.394 fused_ordering(902) 00:11:55.394 fused_ordering(903) 00:11:55.394 fused_ordering(904) 00:11:55.394 fused_ordering(905) 00:11:55.394 fused_ordering(906) 00:11:55.394 fused_ordering(907) 00:11:55.394 fused_ordering(908) 00:11:55.394 fused_ordering(909) 00:11:55.394 fused_ordering(910) 00:11:55.394 fused_ordering(911) 00:11:55.394 fused_ordering(912) 00:11:55.394 fused_ordering(913) 00:11:55.394 fused_ordering(914) 00:11:55.394 fused_ordering(915) 00:11:55.394 fused_ordering(916) 00:11:55.394 fused_ordering(917) 00:11:55.394 fused_ordering(918) 00:11:55.394 fused_ordering(919) 00:11:55.394 fused_ordering(920) 00:11:55.394 fused_ordering(921) 00:11:55.394 fused_ordering(922) 00:11:55.394 fused_ordering(923) 00:11:55.394 fused_ordering(924) 00:11:55.394 fused_ordering(925) 00:11:55.394 fused_ordering(926) 00:11:55.394 fused_ordering(927) 00:11:55.394 fused_ordering(928) 00:11:55.394 fused_ordering(929) 00:11:55.394 fused_ordering(930) 00:11:55.394 fused_ordering(931) 00:11:55.394 
fused_ordering(932) 00:11:55.394 fused_ordering(933) 00:11:55.394 fused_ordering(934) 00:11:55.394 fused_ordering(935) 00:11:55.394 fused_ordering(936) 00:11:55.394 fused_ordering(937) 00:11:55.394 fused_ordering(938) 00:11:55.394 fused_ordering(939) 00:11:55.394 fused_ordering(940) 00:11:55.394 fused_ordering(941) 00:11:55.394 fused_ordering(942) 00:11:55.394 fused_ordering(943) 00:11:55.394 fused_ordering(944) 00:11:55.394 fused_ordering(945) 00:11:55.394 fused_ordering(946) 00:11:55.394 fused_ordering(947) 00:11:55.394 fused_ordering(948) 00:11:55.394 fused_ordering(949) 00:11:55.394 fused_ordering(950) 00:11:55.394 fused_ordering(951) 00:11:55.394 fused_ordering(952) 00:11:55.394 fused_ordering(953) 00:11:55.394 fused_ordering(954) 00:11:55.394 fused_ordering(955) 00:11:55.394 fused_ordering(956) 00:11:55.394 fused_ordering(957) 00:11:55.394 fused_ordering(958) 00:11:55.394 fused_ordering(959) 00:11:55.394 fused_ordering(960) 00:11:55.394 fused_ordering(961) 00:11:55.394 fused_ordering(962) 00:11:55.394 fused_ordering(963) 00:11:55.394 fused_ordering(964) 00:11:55.394 fused_ordering(965) 00:11:55.394 fused_ordering(966) 00:11:55.394 fused_ordering(967) 00:11:55.394 fused_ordering(968) 00:11:55.394 fused_ordering(969) 00:11:55.394 fused_ordering(970) 00:11:55.394 fused_ordering(971) 00:11:55.394 fused_ordering(972) 00:11:55.394 fused_ordering(973) 00:11:55.394 fused_ordering(974) 00:11:55.394 fused_ordering(975) 00:11:55.394 fused_ordering(976) 00:11:55.394 fused_ordering(977) 00:11:55.394 fused_ordering(978) 00:11:55.394 fused_ordering(979) 00:11:55.394 fused_ordering(980) 00:11:55.394 fused_ordering(981) 00:11:55.394 fused_ordering(982) 00:11:55.394 fused_ordering(983) 00:11:55.394 fused_ordering(984) 00:11:55.394 fused_ordering(985) 00:11:55.394 fused_ordering(986) 00:11:55.394 fused_ordering(987) 00:11:55.394 fused_ordering(988) 00:11:55.394 fused_ordering(989) 00:11:55.394 fused_ordering(990) 00:11:55.394 fused_ordering(991) 00:11:55.394 fused_ordering(992) 
00:11:55.394 fused_ordering(993) 00:11:55.394 fused_ordering(994) 00:11:55.394 fused_ordering(995) 00:11:55.394 fused_ordering(996) 00:11:55.394 fused_ordering(997) 00:11:55.394 fused_ordering(998) 00:11:55.394 fused_ordering(999) 00:11:55.394 fused_ordering(1000) 00:11:55.394 fused_ordering(1001) 00:11:55.394 fused_ordering(1002) 00:11:55.394 fused_ordering(1003) 00:11:55.394 fused_ordering(1004) 00:11:55.394 fused_ordering(1005) 00:11:55.394 fused_ordering(1006) 00:11:55.394 fused_ordering(1007) 00:11:55.394 fused_ordering(1008) 00:11:55.394 fused_ordering(1009) 00:11:55.394 fused_ordering(1010) 00:11:55.394 fused_ordering(1011) 00:11:55.394 fused_ordering(1012) 00:11:55.394 fused_ordering(1013) 00:11:55.394 fused_ordering(1014) 00:11:55.394 fused_ordering(1015) 00:11:55.394 fused_ordering(1016) 00:11:55.394 fused_ordering(1017) 00:11:55.394 fused_ordering(1018) 00:11:55.394 fused_ordering(1019) 00:11:55.394 fused_ordering(1020) 00:11:55.394 fused_ordering(1021) 00:11:55.394 fused_ordering(1022) 00:11:55.394 fused_ordering(1023) 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.394 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.394 rmmod nvme_tcp 00:11:55.653 
rmmod nvme_fabrics 00:11:55.653 rmmod nvme_keyring 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2762972 ']' 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2762972 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2762972 ']' 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2762972 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762972 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762972' 00:11:55.653 killing process with pid 2762972 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2762972 00:11:55.653 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2762972 00:11:55.911 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.911 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.814 00:11:57.814 real 0m9.622s 00:11:57.814 user 0m4.443s 00:11:57.814 sys 0m5.085s 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:57.814 ************************************ 00:11:57.814 END TEST nvmf_fused_ordering 00:11:57.814 
************************************ 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.814 ************************************ 00:11:57.814 START TEST nvmf_ns_masking 00:11:57.814 ************************************ 00:11:57.814 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:58.073 * Looking for test storage... 00:11:58.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.073 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.073 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.074 --rc genhtml_branch_coverage=1 00:11:58.074 --rc genhtml_function_coverage=1 00:11:58.074 --rc genhtml_legend=1 00:11:58.074 --rc geninfo_all_blocks=1 00:11:58.074 --rc 
geninfo_unexecuted_blocks=1 00:11:58.074 00:11:58.074 ' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.074 --rc genhtml_branch_coverage=1 00:11:58.074 --rc genhtml_function_coverage=1 00:11:58.074 --rc genhtml_legend=1 00:11:58.074 --rc geninfo_all_blocks=1 00:11:58.074 --rc geninfo_unexecuted_blocks=1 00:11:58.074 00:11:58.074 ' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.074 --rc genhtml_branch_coverage=1 00:11:58.074 --rc genhtml_function_coverage=1 00:11:58.074 --rc genhtml_legend=1 00:11:58.074 --rc geninfo_all_blocks=1 00:11:58.074 --rc geninfo_unexecuted_blocks=1 00:11:58.074 00:11:58.074 ' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.074 --rc genhtml_branch_coverage=1 00:11:58.074 --rc genhtml_function_coverage=1 00:11:58.074 --rc genhtml_legend=1 00:11:58.074 --rc geninfo_all_blocks=1 00:11:58.074 --rc geninfo_unexecuted_blocks=1 00:11:58.074 00:11:58.074 ' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.074 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.074 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6796784e-2430-4360-a39e-c9209077f66f 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ee04ce63-e18b-41ea-b67a-a3d5d3c20f5d 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:58.074 16:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bad0df2e-60df-410f-9e56-44bf8f6edd7c 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.074 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.075 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:58.075 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:58.075 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.075 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.429 16:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:03.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:03.429 16:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:03.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.429 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:03.430 Found net devices under 0000:86:00.0: cvl_0_0 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:03.430 Found net devices under 0000:86:00.1: 
cvl_0_1 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.430 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:12:03.688 00:12:03.688 --- 10.0.0.2 ping statistics --- 00:12:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.688 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:12:03.688 00:12:03.688 --- 10.0.0.1 ping statistics --- 00:12:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.688 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.688 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2766781 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2766781 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2766781 ']' 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.689 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.947 [2024-11-04 16:23:30.516517] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:12:03.947 [2024-11-04 16:23:30.516568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.947 [2024-11-04 16:23:30.587496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.947 [2024-11-04 16:23:30.629477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.947 [2024-11-04 16:23:30.629516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.947 [2024-11-04 16:23:30.629524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.947 [2024-11-04 16:23:30.629530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.947 [2024-11-04 16:23:30.629535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:03.947 [2024-11-04 16:23:30.630098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.947 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:04.205 [2024-11-04 16:23:30.934576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.205 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:04.205 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:04.205 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:04.463 Malloc1 00:12:04.463 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:04.721 Malloc2 00:12:04.721 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.979 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:04.979 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.236 [2024-11-04 16:23:31.891680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.236 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:05.237 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bad0df2e-60df-410f-9e56-44bf8f6edd7c -a 10.0.0.2 -s 4420 -i 4 00:12:05.494 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.494 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:05.494 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.494 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:05.494 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.389 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.647 [ 0]:0x1 00:12:07.647 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.647 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.647 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12344325823f4304b7868d66a712b926 00:12:07.647 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12344325823f4304b7868d66a712b926 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.647 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.905 [ 0]:0x1 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12344325823f4304b7868d66a712b926 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12344325823f4304b7868d66a712b926 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.905 [ 1]:0x2 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.905 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.163 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:08.420 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:08.420 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bad0df2e-60df-410f-9e56-44bf8f6edd7c -a 10.0.0.2 -s 4420 -i 4 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:08.678 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:10.577 16:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.577 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:10.836 [ 0]:0x2 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:10.836 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.093 [ 0]:0x1 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12344325823f4304b7868d66a712b926 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12344325823f4304b7868d66a712b926 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:11.093 [ 1]:0x2 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.093 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.351 
16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.351 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.351 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:11.351 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.351 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:11.351 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.352 [ 0]:0x2 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.352 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.610 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:11.610 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bad0df2e-60df-410f-9e56-44bf8f6edd7c -a 10.0.0.2 -s 4420 -i 4 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:11.868 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.767 [ 0]:0x1 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.767 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12344325823f4304b7868d66a712b926 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12344325823f4304b7868d66a712b926 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 
]] 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.025 [ 1]:0x2 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.025 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:14.283 16:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.283 [ 0]:0x2 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.283 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.284 
16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:14.284 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:14.542 [2024-11-04 16:23:41.134775] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:14.542 request: 00:12:14.542 { 00:12:14.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.542 "nsid": 2, 00:12:14.542 "host": "nqn.2016-06.io.spdk:host1", 00:12:14.542 "method": "nvmf_ns_remove_host", 00:12:14.542 "req_id": 1 00:12:14.542 } 00:12:14.542 Got JSON-RPC error response 00:12:14.542 response: 00:12:14.542 { 00:12:14.542 "code": -32602, 00:12:14.542 "message": "Invalid parameters" 00:12:14.542 } 00:12:14.542 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:14.542 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:14.543 
16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # 
ns_is_visible 0x2 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.543 [ 0]:0x2 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37298c67609c486b8c2b814441b7cfd6 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37298c67609c486b8c2b814441b7cfd6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2768765 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2768765 /var/tmp/host.sock 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2768765 ']' 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:14.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.543 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:14.543 [2024-11-04 16:23:41.357712] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:12:14.543 [2024-11-04 16:23:41.357759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768765 ] 00:12:14.801 [2024-11-04 16:23:41.420986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.801 [2024-11-04 16:23:41.461293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.060 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.060 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:15.060 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.060 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:12:15.318 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6796784e-2430-4360-a39e-c9209077f66f 00:12:15.318 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:15.318 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6796784E24304360A39EC9209077F66F -i 00:12:15.577 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ee04ce63-e18b-41ea-b67a-a3d5d3c20f5d 00:12:15.577 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:15.577 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EE04CE63E18B41EAB67AA3D5D3C20F5D -i 00:12:15.834 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.834 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:16.092 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:16.092 16:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 
4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:16.351 nvme0n1 00:12:16.351 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:16.351 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:16.608 nvme1n2 00:12:16.608 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:16.609 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:16.609 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:16.609 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:16.609 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:16.866 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:16.866 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:16.866 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:16.866 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@135 -- # [[ 6796784e-2430-4360-a39e-c9209077f66f == \6\7\9\6\7\8\4\e\-\2\4\3\0\-\4\3\6\0\-\a\3\9\e\-\c\9\2\0\9\0\7\7\f\6\6\f ]] 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ee04ce63-e18b-41ea-b67a-a3d5d3c20f5d == \e\e\0\4\c\e\6\3\-\e\1\8\b\-\4\1\e\a\-\b\6\7\a\-\a\3\d\5\d\3\c\2\0\f\5\d ]] 00:12:17.124 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.382 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6796784e-2430-4360-a39e-c9209077f66f 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6796784E24304360A39EC9209077F66F 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6796784E24304360A39EC9209077F66F 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:17.640 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6796784E24304360A39EC9209077F66F 00:12:17.898 [2024-11-04 16:23:44.520041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:17.898 [2024-11-04 16:23:44.520075] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, 
error=-19 00:12:17.898 [2024-11-04 16:23:44.520083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.898 request: 00:12:17.898 { 00:12:17.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.898 "namespace": { 00:12:17.898 "bdev_name": "invalid", 00:12:17.898 "nsid": 1, 00:12:17.898 "nguid": "6796784E24304360A39EC9209077F66F", 00:12:17.898 "no_auto_visible": false 00:12:17.898 }, 00:12:17.898 "method": "nvmf_subsystem_add_ns", 00:12:17.898 "req_id": 1 00:12:17.898 } 00:12:17.898 Got JSON-RPC error response 00:12:17.898 response: 00:12:17.898 { 00:12:17.898 "code": -32602, 00:12:17.898 "message": "Invalid parameters" 00:12:17.898 } 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6796784e-2430-4360-a39e-c9209077f66f 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:17.898 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6796784E24304360A39EC9209077F66F -i 00:12:18.155 16:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:20.052 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:20.052 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:20.052 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:20.310 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2768765 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2768765 ']' 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2768765 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2768765 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2768765' 00:12:20.311 killing process with pid 2768765 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2768765 00:12:20.311 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2768765 00:12:20.569 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.826 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- 
# trap - SIGINT SIGTERM EXIT 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.827 rmmod nvme_tcp 00:12:20.827 rmmod nvme_fabrics 00:12:20.827 rmmod nvme_keyring 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2766781 ']' 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2766781 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2766781 ']' 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2766781 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2766781 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2766781' 00:12:20.827 killing process with pid 2766781 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2766781 00:12:20.827 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2766781 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.085 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.086 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.086 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.086 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.086 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.086 
16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.617 00:12:23.617 real 0m25.261s 00:12:23.617 user 0m30.186s 00:12:23.617 sys 0m6.711s 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.617 ************************************ 00:12:23.617 END TEST nvmf_ns_masking 00:12:23.617 ************************************ 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.617 ************************************ 00:12:23.617 START TEST nvmf_nvme_cli 00:12:23.617 ************************************ 00:12:23.617 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:23.617 * Looking for test storage... 
00:12:23.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.617 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:23.618 16:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.618 --rc 
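The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-` and `:` into arrays, then walks the components numerically until one pair differs (here `1 < 2`, so lcov 1.15 is "less than" 2 and the branch-coverage flags are enabled). A rough Python analogue of that loop; treating missing or non-numeric components as 0 is our simplification, not necessarily what `scripts/common.sh` does:

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Rough analogue of the lt/cmp_versions helpers traced above:
    split each version on '.', '-' or ':' and compare the components
    numerically, left to right. Missing or non-numeric components
    count as 0 here (an assumption, not taken from the shell source)."""
    def split(v: str) -> list[int]:
        return [int(p) if re.fullmatch(r"[0-9]+", p) else 0
                for p in re.split(r"[.:-]", v)]
    va, vb = split(a), split(b)
    # Pad the shorter list so both have the same number of components.
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # True, so lcov 1.15 gets the --rc branch-coverage options
```

The component-wise numeric compare is what makes `1.15 < 2` come out true even though a plain string compare would order them the other way.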
genhtml_branch_coverage=1 00:12:23.618 --rc genhtml_function_coverage=1 00:12:23.618 --rc genhtml_legend=1 00:12:23.618 --rc geninfo_all_blocks=1 00:12:23.618 --rc geninfo_unexecuted_blocks=1 00:12:23.618 00:12:23.618 ' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.618 --rc genhtml_branch_coverage=1 00:12:23.618 --rc genhtml_function_coverage=1 00:12:23.618 --rc genhtml_legend=1 00:12:23.618 --rc geninfo_all_blocks=1 00:12:23.618 --rc geninfo_unexecuted_blocks=1 00:12:23.618 00:12:23.618 ' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.618 --rc genhtml_branch_coverage=1 00:12:23.618 --rc genhtml_function_coverage=1 00:12:23.618 --rc genhtml_legend=1 00:12:23.618 --rc geninfo_all_blocks=1 00:12:23.618 --rc geninfo_unexecuted_blocks=1 00:12:23.618 00:12:23.618 ' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.618 --rc genhtml_branch_coverage=1 00:12:23.618 --rc genhtml_function_coverage=1 00:12:23.618 --rc genhtml_legend=1 00:12:23.618 --rc geninfo_all_blocks=1 00:12:23.618 --rc geninfo_unexecuted_blocks=1 00:12:23.618 00:12:23.618 ' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.618 16:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.618 16:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.618 16:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.618 16:23:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:28.886 16:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.886 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:28.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:28.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.887 16:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:28.887 Found net devices under 0000:86:00.0: cvl_0_0 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:28.887 Found net devices under 0000:86:00.1: cvl_0_1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.887 16:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.887 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.147 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:12:29.148 00:12:29.148 --- 10.0.0.2 ping statistics --- 00:12:29.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.148 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:29.148 00:12:29.148 --- 10.0.0.1 ping statistics --- 00:12:29.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.148 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.148 16:23:55 
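The namespace plumbing traced above (nvmf/common.sh) can be reproduced standalone. This is a dry-run sketch: interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from this run, and `run` only echoes each command so the sketch works without root; swap it for `sudo "$@"` to execute for real.

```shell
# Dry-run sketch of the SPDK target-namespace setup seen in the log.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # replace with: sudo "$@" to actually execute

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"             # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
run ping -c 1 10.0.0.2                          # connectivity check, as in the log
```

Running nvmf_tgt under `ip netns exec $NS` (as the log does next) isolates the target's network stack from the initiator, so both ends can live on one machine.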
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2773470 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2773470 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2773470 ']' 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.148 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.148 [2024-11-04 16:23:55.850117] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:12:29.148 [2024-11-04 16:23:55.850160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.148 [2024-11-04 16:23:55.915842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.148 [2024-11-04 16:23:55.959085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.148 [2024-11-04 16:23:55.959121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.148 [2024-11-04 16:23:55.959128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.148 [2024-11-04 16:23:55.959134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.148 [2024-11-04 16:23:55.959138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:29.148 [2024-11-04 16:23:55.960720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.148 [2024-11-04 16:23:55.960740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.148 [2024-11-04 16:23:55.960828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.148 [2024-11-04 16:23:55.960830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 [2024-11-04 16:23:56.096392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 Malloc0 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 Malloc1 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.406 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.407 [2024-11-04 16:23:56.187631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.407 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:29.665 00:12:29.665 Discovery Log Number of Records 2, Generation counter 2 00:12:29.665 =====Discovery Log Entry 0====== 00:12:29.665 trtype: tcp 00:12:29.665 adrfam: ipv4 00:12:29.665 subtype: current discovery subsystem 00:12:29.665 treq: not required 00:12:29.665 portid: 0 00:12:29.665 trsvcid: 4420 
00:12:29.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:29.665 traddr: 10.0.0.2 00:12:29.665 eflags: explicit discovery connections, duplicate discovery information 00:12:29.665 sectype: none 00:12:29.665 =====Discovery Log Entry 1====== 00:12:29.665 trtype: tcp 00:12:29.665 adrfam: ipv4 00:12:29.665 subtype: nvme subsystem 00:12:29.665 treq: not required 00:12:29.665 portid: 0 00:12:29.665 trsvcid: 4420 00:12:29.665 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:29.665 traddr: 10.0.0.2 00:12:29.665 eflags: none 00:12:29.665 sectype: none 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:29.665 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.039 16:23:57 
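The provisioning sequence from target/nvme_cli.sh that produced the discovery log above can be sketched as an RPC script: create the TCP transport, two 64 MiB malloc bdevs, a subsystem exposing both as namespaces, and listeners on 10.0.0.2:4420. In this sketch `rpc` only echoes; point it at SPDK's scripts/rpc.py against a running nvmf_tgt to execute (all values are taken from this run).

```shell
# Dry-run sketch of the rpc_cmd calls traced in the log.
rpc() { echo "rpc.py $*"; }   # replace with: scripts/rpc.py "$@"

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512-byte blocks
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two namespaces are why `nvme discover` reports two log entries and the initiator later sees /dev/nvme0n1 and /dev/nvme0n2.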
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:31.039 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.039 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.039 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:31.039 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:31.039 16:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:32.939 
16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:32.939 /dev/nvme0n2 ]] 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:32.939 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:33.197 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.474 rmmod nvme_tcp 00:12:33.474 rmmod nvme_fabrics 00:12:33.474 rmmod nvme_keyring 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2773470 ']' 
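The `waitforserial` polling seen earlier (common/autotest_common.sh) retries `lsblk` until the expected number of block devices with the given serial appears. A generic sketch follows; the probe command is parameterized here (an addition of this sketch, not in the original helper) so the loop can be exercised without real NVMe devices.

```shell
# Sketch of the waitforserial retry loop: up to 16 attempts, 2 s apart.
waitforserial() {
    local serial=$1 expected=${2:-1} probe=${3:-'lsblk -l -o NAME,SERIAL'}
    local i=0 n
    while (( i++ <= 15 )); do
        # count devices whose SERIAL column matches
        n=$(eval "$probe" 2>/dev/null | grep -c "$serial" || true)
        (( n == expected )) && return 0
        sleep 2
    done
    return 1
}
```

Usage mirrors the log: `waitforserial SPDKISFASTANDAWESOME 2` succeeds once both namespaces of cnode1 are connected, and the matching `waitforserial_disconnect` waits for the count to drop back to zero after `nvme disconnect`.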
00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2773470 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2773470 ']' 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2773470 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773470 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773470' 00:12:33.474 killing process with pid 2773470 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2773470 00:12:33.474 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2773470 00:12:33.745 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.745 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.745 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.746 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.277 00:12:36.277 real 0m12.589s 00:12:36.277 user 0m19.528s 00:12:36.277 sys 0m4.857s 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:36.277 ************************************ 00:12:36.277 END TEST nvmf_nvme_cli 00:12:36.277 ************************************ 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.277 ************************************ 00:12:36.277 
START TEST nvmf_vfio_user 00:12:36.277 ************************************ 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:36.277 * Looking for test storage... 00:12:36.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.277 16:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.277 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:36.278 16:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.278 --rc genhtml_branch_coverage=1 00:12:36.278 --rc genhtml_function_coverage=1 00:12:36.278 --rc genhtml_legend=1 00:12:36.278 --rc geninfo_all_blocks=1 00:12:36.278 --rc geninfo_unexecuted_blocks=1 00:12:36.278 00:12:36.278 ' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.278 --rc genhtml_branch_coverage=1 00:12:36.278 --rc genhtml_function_coverage=1 00:12:36.278 --rc genhtml_legend=1 00:12:36.278 --rc geninfo_all_blocks=1 00:12:36.278 --rc geninfo_unexecuted_blocks=1 00:12:36.278 00:12:36.278 ' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.278 --rc genhtml_branch_coverage=1 00:12:36.278 --rc genhtml_function_coverage=1 00:12:36.278 --rc genhtml_legend=1 00:12:36.278 --rc geninfo_all_blocks=1 00:12:36.278 --rc geninfo_unexecuted_blocks=1 00:12:36.278 00:12:36.278 ' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.278 --rc genhtml_branch_coverage=1 00:12:36.278 --rc genhtml_function_coverage=1 00:12:36.278 --rc genhtml_legend=1 00:12:36.278 --rc geninfo_all_blocks=1 00:12:36.278 --rc geninfo_unexecuted_blocks=1 00:12:36.278 00:12:36.278 ' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.278 
16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:36.278 16:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2774895 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2774895' 00:12:36.278 Process pid: 2774895 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2774895 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2774895 ']' 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.278 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:36.278 [2024-11-04 16:24:02.880159] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:12:36.279 [2024-11-04 16:24:02.880206] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.279 [2024-11-04 16:24:02.943191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.279 [2024-11-04 16:24:02.985560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.279 [2024-11-04 16:24:02.985596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.279 [2024-11-04 16:24:02.985608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.279 [2024-11-04 16:24:02.985614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.279 [2024-11-04 16:24:02.985636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:36.279 [2024-11-04 16:24:02.987170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.279 [2024-11-04 16:24:02.987190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.279 [2024-11-04 16:24:02.987279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.279 [2024-11-04 16:24:02.987280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.279 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.279 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:36.279 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:37.651 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:37.908 Malloc1 00:12:37.908 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:37.908 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:38.166 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:38.424 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:38.424 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:38.424 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:38.681 Malloc2 00:12:38.681 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:38.939 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:38.939 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:39.197 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:39.197 [2024-11-04 16:24:05.969073] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:12:39.197 [2024-11-04 16:24:05.969120] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775377 ] 00:12:39.197 [2024-11-04 16:24:06.008093] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:39.197 [2024-11-04 16:24:06.013450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:39.197 [2024-11-04 16:24:06.013468] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9c9dc99000 00:12:39.197 [2024-11-04 16:24:06.014446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.015453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.016457] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.017460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.018468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.019466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.020470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.197 [2024-11-04 16:24:06.021469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.457 [2024-11-04 16:24:06.022485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:39.457 [2024-11-04 16:24:06.022497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9c9dc8e000 00:12:39.457 [2024-11-04 16:24:06.023412] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:39.457 [2024-11-04 16:24:06.036883] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:39.457 [2024-11-04 16:24:06.036911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:39.457 [2024-11-04 16:24:06.039584] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:12:39.457 [2024-11-04 16:24:06.039625] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:39.457 [2024-11-04 16:24:06.039695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:39.457 [2024-11-04 16:24:06.039711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:39.457 [2024-11-04 16:24:06.039716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:39.457 [2024-11-04 16:24:06.040583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:39.457 [2024-11-04 16:24:06.040591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:39.457 [2024-11-04 16:24:06.040597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:39.457 [2024-11-04 16:24:06.041584] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:39.457 [2024-11-04 16:24:06.041592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:39.457 [2024-11-04 16:24:06.041599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.042597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:39.457 [2024-11-04 16:24:06.042609] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.043608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:39.457 [2024-11-04 16:24:06.043616] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:39.457 [2024-11-04 16:24:06.043621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.043626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.043734] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:39.457 [2024-11-04 16:24:06.043739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.043743] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:39.457 [2024-11-04 16:24:06.044613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:39.457 [2024-11-04 16:24:06.045616] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:39.457 [2024-11-04 16:24:06.046618] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:12:39.457 [2024-11-04 16:24:06.047612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:39.457 [2024-11-04 16:24:06.047680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:39.457 [2024-11-04 16:24:06.048622] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:39.457 [2024-11-04 16:24:06.048630] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:39.457 [2024-11-04 16:24:06.048634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:39.457 [2024-11-04 16:24:06.048659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048676] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.457 [2024-11-04 16:24:06.048681] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.457 [2024-11-04 16:24:06.048684] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.457 [2024-11-04 16:24:06.048699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.457 [2024-11-04 16:24:06.048734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:39.457 [2024-11-04 16:24:06.048744] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:39.457 [2024-11-04 16:24:06.048748] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:39.457 [2024-11-04 16:24:06.048752] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:39.457 [2024-11-04 16:24:06.048756] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:39.457 [2024-11-04 16:24:06.048760] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:39.457 [2024-11-04 16:24:06.048766] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:39.457 [2024-11-04 16:24:06.048770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:39.457 [2024-11-04 16:24:06.048799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:39.457 [2024-11-04 16:24:06.048810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.457 [2024-11-04 
16:24:06.048818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.457 [2024-11-04 16:24:06.048825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.457 [2024-11-04 16:24:06.048833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.457 [2024-11-04 16:24:06.048839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:39.457 [2024-11-04 16:24:06.048865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:39.457 [2024-11-04 16:24:06.048872] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:39.457 [2024-11-04 16:24:06.048877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:39.457 [2024-11-04 16:24:06.048904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:39.457 [2024-11-04 16:24:06.048953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.048968] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:39.457 [2024-11-04 16:24:06.048972] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:39.457 [2024-11-04 16:24:06.048975] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.457 [2024-11-04 16:24:06.048981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:39.457 [2024-11-04 16:24:06.048994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:39.457 [2024-11-04 16:24:06.049002] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:39.457 [2024-11-04 16:24:06.049013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.049020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:39.457 [2024-11-04 16:24:06.049025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.457 [2024-11-04 16:24:06.049029] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.457 [2024-11-04 16:24:06.049032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.457 [2024-11-04 16:24:06.049038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049084] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.458 [2024-11-04 16:24:06.049088] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.458 [2024-11-04 16:24:06.049091] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.458 [2024-11-04 16:24:06.049097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049146] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:39.458 [2024-11-04 16:24:06.049151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:39.458 [2024-11-04 16:24:06.049155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:39.458 [2024-11-04 16:24:06.049171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049180] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049251] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:39.458 [2024-11-04 16:24:06.049256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:39.458 [2024-11-04 16:24:06.049259] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:39.458 [2024-11-04 16:24:06.049262] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:39.458 [2024-11-04 16:24:06.049265] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:39.458 [2024-11-04 16:24:06.049271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:12:39.458 [2024-11-04 16:24:06.049277] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:39.458 [2024-11-04 16:24:06.049281] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:39.458 [2024-11-04 16:24:06.049284] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.458 [2024-11-04 16:24:06.049289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049295] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:39.458 [2024-11-04 16:24:06.049299] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.458 [2024-11-04 16:24:06.049302] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.458 [2024-11-04 16:24:06.049307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049315] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:39.458 [2024-11-04 16:24:06.049319] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:39.458 [2024-11-04 16:24:06.049322] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:39.458 [2024-11-04 16:24:06.049327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:39.458 [2024-11-04 16:24:06.049333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:39.458 [2024-11-04 16:24:06.049359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:39.458 ===================================================== 00:12:39.458 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:39.458 ===================================================== 00:12:39.458 Controller Capabilities/Features 00:12:39.458 ================================ 00:12:39.458 Vendor ID: 4e58 00:12:39.458 Subsystem Vendor ID: 4e58 00:12:39.458 Serial Number: SPDK1 00:12:39.458 Model Number: SPDK bdev Controller 00:12:39.458 Firmware Version: 25.01 00:12:39.458 Recommended Arb Burst: 6 00:12:39.458 IEEE OUI Identifier: 8d 6b 50 00:12:39.458 Multi-path I/O 00:12:39.458 May have multiple subsystem ports: Yes 00:12:39.458 May have multiple controllers: Yes 00:12:39.458 Associated with SR-IOV VF: No 00:12:39.458 Max Data Transfer Size: 131072 00:12:39.458 Max Number of Namespaces: 32 00:12:39.458 Max Number of I/O Queues: 127 00:12:39.458 NVMe Specification Version (VS): 1.3 00:12:39.458 NVMe Specification Version (Identify): 1.3 00:12:39.458 Maximum Queue Entries: 256 00:12:39.458 Contiguous Queues Required: Yes 00:12:39.458 Arbitration Mechanisms Supported 00:12:39.458 Weighted Round Robin: Not Supported 00:12:39.458 Vendor Specific: Not Supported 00:12:39.458 Reset Timeout: 15000 ms 00:12:39.458 Doorbell Stride: 4 bytes 00:12:39.458 NVM Subsystem Reset: Not Supported 00:12:39.458 Command Sets Supported 00:12:39.458 NVM Command Set: Supported 00:12:39.458 Boot Partition: Not Supported 00:12:39.458 Memory 
Page Size Minimum: 4096 bytes 00:12:39.458 Memory Page Size Maximum: 4096 bytes 00:12:39.458 Persistent Memory Region: Not Supported 00:12:39.458 Optional Asynchronous Events Supported 00:12:39.458 Namespace Attribute Notices: Supported 00:12:39.458 Firmware Activation Notices: Not Supported 00:12:39.458 ANA Change Notices: Not Supported 00:12:39.458 PLE Aggregate Log Change Notices: Not Supported 00:12:39.458 LBA Status Info Alert Notices: Not Supported 00:12:39.458 EGE Aggregate Log Change Notices: Not Supported 00:12:39.458 Normal NVM Subsystem Shutdown event: Not Supported 00:12:39.458 Zone Descriptor Change Notices: Not Supported 00:12:39.458 Discovery Log Change Notices: Not Supported 00:12:39.458 Controller Attributes 00:12:39.458 128-bit Host Identifier: Supported 00:12:39.458 Non-Operational Permissive Mode: Not Supported 00:12:39.458 NVM Sets: Not Supported 00:12:39.458 Read Recovery Levels: Not Supported 00:12:39.458 Endurance Groups: Not Supported 00:12:39.458 Predictable Latency Mode: Not Supported 00:12:39.458 Traffic Based Keep ALive: Not Supported 00:12:39.458 Namespace Granularity: Not Supported 00:12:39.458 SQ Associations: Not Supported 00:12:39.458 UUID List: Not Supported 00:12:39.458 Multi-Domain Subsystem: Not Supported 00:12:39.458 Fixed Capacity Management: Not Supported 00:12:39.458 Variable Capacity Management: Not Supported 00:12:39.458 Delete Endurance Group: Not Supported 00:12:39.458 Delete NVM Set: Not Supported 00:12:39.458 Extended LBA Formats Supported: Not Supported 00:12:39.458 Flexible Data Placement Supported: Not Supported 00:12:39.458 00:12:39.458 Controller Memory Buffer Support 00:12:39.458 ================================ 00:12:39.458 Supported: No 00:12:39.458 00:12:39.458 Persistent Memory Region Support 00:12:39.458 ================================ 00:12:39.458 Supported: No 00:12:39.458 00:12:39.458 Admin Command Set Attributes 00:12:39.458 ============================ 00:12:39.458 Security Send/Receive: Not Supported 
00:12:39.458 Format NVM: Not Supported 00:12:39.458 Firmware Activate/Download: Not Supported 00:12:39.458 Namespace Management: Not Supported 00:12:39.458 Device Self-Test: Not Supported 00:12:39.458 Directives: Not Supported 00:12:39.458 NVMe-MI: Not Supported 00:12:39.458 Virtualization Management: Not Supported 00:12:39.458 Doorbell Buffer Config: Not Supported 00:12:39.458 Get LBA Status Capability: Not Supported 00:12:39.458 Command & Feature Lockdown Capability: Not Supported 00:12:39.459 Abort Command Limit: 4 00:12:39.459 Async Event Request Limit: 4 00:12:39.459 Number of Firmware Slots: N/A 00:12:39.459 Firmware Slot 1 Read-Only: N/A 00:12:39.459 Firmware Activation Without Reset: N/A 00:12:39.459 Multiple Update Detection Support: N/A 00:12:39.459 Firmware Update Granularity: No Information Provided 00:12:39.459 Per-Namespace SMART Log: No 00:12:39.459 Asymmetric Namespace Access Log Page: Not Supported 00:12:39.459 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:39.459 Command Effects Log Page: Supported 00:12:39.459 Get Log Page Extended Data: Supported 00:12:39.459 Telemetry Log Pages: Not Supported 00:12:39.459 Persistent Event Log Pages: Not Supported 00:12:39.459 Supported Log Pages Log Page: May Support 00:12:39.459 Commands Supported & Effects Log Page: Not Supported 00:12:39.459 Feature Identifiers & Effects Log Page:May Support 00:12:39.459 NVMe-MI Commands & Effects Log Page: May Support 00:12:39.459 Data Area 4 for Telemetry Log: Not Supported 00:12:39.459 Error Log Page Entries Supported: 128 00:12:39.459 Keep Alive: Supported 00:12:39.459 Keep Alive Granularity: 10000 ms 00:12:39.459 00:12:39.459 NVM Command Set Attributes 00:12:39.459 ========================== 00:12:39.459 Submission Queue Entry Size 00:12:39.459 Max: 64 00:12:39.459 Min: 64 00:12:39.459 Completion Queue Entry Size 00:12:39.459 Max: 16 00:12:39.459 Min: 16 00:12:39.459 Number of Namespaces: 32 00:12:39.459 Compare Command: Supported 00:12:39.459 Write Uncorrectable 
Command: Not Supported 00:12:39.459 Dataset Management Command: Supported 00:12:39.459 Write Zeroes Command: Supported 00:12:39.459 Set Features Save Field: Not Supported 00:12:39.459 Reservations: Not Supported 00:12:39.459 Timestamp: Not Supported 00:12:39.459 Copy: Supported 00:12:39.459 Volatile Write Cache: Present 00:12:39.459 Atomic Write Unit (Normal): 1 00:12:39.459 Atomic Write Unit (PFail): 1 00:12:39.459 Atomic Compare & Write Unit: 1 00:12:39.459 Fused Compare & Write: Supported 00:12:39.459 Scatter-Gather List 00:12:39.459 SGL Command Set: Supported (Dword aligned) 00:12:39.459 SGL Keyed: Not Supported 00:12:39.459 SGL Bit Bucket Descriptor: Not Supported 00:12:39.459 SGL Metadata Pointer: Not Supported 00:12:39.459 Oversized SGL: Not Supported 00:12:39.459 SGL Metadata Address: Not Supported 00:12:39.459 SGL Offset: Not Supported 00:12:39.459 Transport SGL Data Block: Not Supported 00:12:39.459 Replay Protected Memory Block: Not Supported 00:12:39.459 00:12:39.459 Firmware Slot Information 00:12:39.459 ========================= 00:12:39.459 Active slot: 1 00:12:39.459 Slot 1 Firmware Revision: 25.01 00:12:39.459 00:12:39.459 00:12:39.459 Commands Supported and Effects 00:12:39.459 ============================== 00:12:39.459 Admin Commands 00:12:39.459 -------------- 00:12:39.459 Get Log Page (02h): Supported 00:12:39.459 Identify (06h): Supported 00:12:39.459 Abort (08h): Supported 00:12:39.459 Set Features (09h): Supported 00:12:39.459 Get Features (0Ah): Supported 00:12:39.459 Asynchronous Event Request (0Ch): Supported 00:12:39.459 Keep Alive (18h): Supported 00:12:39.459 I/O Commands 00:12:39.459 ------------ 00:12:39.459 Flush (00h): Supported LBA-Change 00:12:39.459 Write (01h): Supported LBA-Change 00:12:39.459 Read (02h): Supported 00:12:39.459 Compare (05h): Supported 00:12:39.459 Write Zeroes (08h): Supported LBA-Change 00:12:39.459 Dataset Management (09h): Supported LBA-Change 00:12:39.459 Copy (19h): Supported LBA-Change 00:12:39.459 
00:12:39.459 Error Log 00:12:39.459 ========= 00:12:39.459 00:12:39.459 Arbitration 00:12:39.459 =========== 00:12:39.459 Arbitration Burst: 1 00:12:39.459 00:12:39.459 Power Management 00:12:39.459 ================ 00:12:39.459 Number of Power States: 1 00:12:39.459 Current Power State: Power State #0 00:12:39.459 Power State #0: 00:12:39.459 Max Power: 0.00 W 00:12:39.459 Non-Operational State: Operational 00:12:39.459 Entry Latency: Not Reported 00:12:39.459 Exit Latency: Not Reported 00:12:39.459 Relative Read Throughput: 0 00:12:39.459 Relative Read Latency: 0 00:12:39.459 Relative Write Throughput: 0 00:12:39.459 Relative Write Latency: 0 00:12:39.459 Idle Power: Not Reported 00:12:39.459 Active Power: Not Reported 00:12:39.459 Non-Operational Permissive Mode: Not Supported 00:12:39.459 00:12:39.459 Health Information 00:12:39.459 ================== 00:12:39.459 Critical Warnings: 00:12:39.459 Available Spare Space: OK 00:12:39.459 Temperature: OK 00:12:39.459 Device Reliability: OK 00:12:39.459 Read Only: No 00:12:39.459 Volatile Memory Backup: OK 00:12:39.459 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:39.459 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:39.459 Available Spare: 0% 00:12:39.459 Available Sp[2024-11-04 16:24:06.049439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:39.459 [2024-11-04 16:24:06.049450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:39.459 [2024-11-04 16:24:06.049474] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:39.459 [2024-11-04 16:24:06.049483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.459 [2024-11-04 16:24:06.049488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.459 [2024-11-04 16:24:06.049494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.459 [2024-11-04 16:24:06.049499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.459 [2024-11-04 16:24:06.052610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:39.459 [2024-11-04 16:24:06.052623] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:39.459 [2024-11-04 16:24:06.053646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:39.459 [2024-11-04 16:24:06.053698] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:39.459 [2024-11-04 16:24:06.053705] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:39.459 [2024-11-04 16:24:06.054654] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:39.459 [2024-11-04 16:24:06.054664] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:39.459 [2024-11-04 16:24:06.054712] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:39.459 [2024-11-04 16:24:06.055679] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:39.459 are Threshold: 0% 00:12:39.459 Life Percentage Used: 0% 
00:12:39.459 Data Units Read: 0 00:12:39.459 Data Units Written: 0 00:12:39.459 Host Read Commands: 0 00:12:39.459 Host Write Commands: 0 00:12:39.459 Controller Busy Time: 0 minutes 00:12:39.459 Power Cycles: 0 00:12:39.459 Power On Hours: 0 hours 00:12:39.459 Unsafe Shutdowns: 0 00:12:39.459 Unrecoverable Media Errors: 0 00:12:39.459 Lifetime Error Log Entries: 0 00:12:39.459 Warning Temperature Time: 0 minutes 00:12:39.459 Critical Temperature Time: 0 minutes 00:12:39.459 00:12:39.459 Number of Queues 00:12:39.459 ================ 00:12:39.459 Number of I/O Submission Queues: 127 00:12:39.459 Number of I/O Completion Queues: 127 00:12:39.459 00:12:39.459 Active Namespaces 00:12:39.459 ================= 00:12:39.459 Namespace ID:1 00:12:39.459 Error Recovery Timeout: Unlimited 00:12:39.459 Command Set Identifier: NVM (00h) 00:12:39.459 Deallocate: Supported 00:12:39.459 Deallocated/Unwritten Error: Not Supported 00:12:39.459 Deallocated Read Value: Unknown 00:12:39.459 Deallocate in Write Zeroes: Not Supported 00:12:39.459 Deallocated Guard Field: 0xFFFF 00:12:39.459 Flush: Supported 00:12:39.459 Reservation: Supported 00:12:39.459 Namespace Sharing Capabilities: Multiple Controllers 00:12:39.459 Size (in LBAs): 131072 (0GiB) 00:12:39.459 Capacity (in LBAs): 131072 (0GiB) 00:12:39.459 Utilization (in LBAs): 131072 (0GiB) 00:12:39.459 NGUID: 641265CB68C045379376B8CA2DE0E883 00:12:39.459 UUID: 641265cb-68c0-4537-9376-b8ca2de0e883 00:12:39.459 Thin Provisioning: Not Supported 00:12:39.459 Per-NS Atomic Units: Yes 00:12:39.459 Atomic Boundary Size (Normal): 0 00:12:39.459 Atomic Boundary Size (PFail): 0 00:12:39.459 Atomic Boundary Offset: 0 00:12:39.459 Maximum Single Source Range Length: 65535 00:12:39.459 Maximum Copy Length: 65535 00:12:39.459 Maximum Source Range Count: 1 00:12:39.460 NGUID/EUI64 Never Reused: No 00:12:39.460 Namespace Write Protected: No 00:12:39.460 Number of LBA Formats: 1 00:12:39.460 Current LBA Format: LBA Format #00 00:12:39.460 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:12:39.460 00:12:39.460 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:39.716 [2024-11-04 16:24:06.288669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.976 Initializing NVMe Controllers 00:12:44.976 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.976 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:44.976 Initialization complete. Launching workers. 00:12:44.976 ======================================================== 00:12:44.976 Latency(us) 00:12:44.976 Device Information : IOPS MiB/s Average min max 00:12:44.976 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39933.58 155.99 3204.92 947.12 6857.18 00:12:44.976 ======================================================== 00:12:44.976 Total : 39933.58 155.99 3204.92 947.12 6857.18 00:12:44.976 00:12:44.976 [2024-11-04 16:24:11.308879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.976 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:44.976 [2024-11-04 16:24:11.543964] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.243 Initializing NVMe Controllers 00:12:50.243 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:50.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:50.243 Initialization complete. Launching workers. 00:12:50.243 ======================================================== 00:12:50.243 Latency(us) 00:12:50.243 Device Information : IOPS MiB/s Average min max 00:12:50.243 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.18 62.65 7986.34 7782.56 11972.71 00:12:50.243 ======================================================== 00:12:50.243 Total : 16038.18 62.65 7986.34 7782.56 11972.71 00:12:50.243 00:12:50.243 [2024-11-04 16:24:16.582414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.243 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:50.243 [2024-11-04 16:24:16.796432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.507 [2024-11-04 16:24:21.872918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.507 Initializing NVMe Controllers 00:12:55.507 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.507 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.507 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:55.507 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:55.507 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:55.507 Initialization complete. 
Launching workers. 00:12:55.507 Starting thread on core 2 00:12:55.507 Starting thread on core 3 00:12:55.507 Starting thread on core 1 00:12:55.507 16:24:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:55.507 [2024-11-04 16:24:22.169051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.692 [2024-11-04 16:24:26.069822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.692 Initializing NVMe Controllers 00:12:59.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:59.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:59.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:59.692 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:59.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:59.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:59.692 Initialization complete. Launching workers. 
00:12:59.692 Starting thread on core 1 with urgent priority queue 00:12:59.692 Starting thread on core 2 with urgent priority queue 00:12:59.692 Starting thread on core 3 with urgent priority queue 00:12:59.692 Starting thread on core 0 with urgent priority queue 00:12:59.692 SPDK bdev Controller (SPDK1 ) core 0: 5553.00 IO/s 18.01 secs/100000 ios 00:12:59.692 SPDK bdev Controller (SPDK1 ) core 1: 5378.00 IO/s 18.59 secs/100000 ios 00:12:59.692 SPDK bdev Controller (SPDK1 ) core 2: 5668.67 IO/s 17.64 secs/100000 ios 00:12:59.692 SPDK bdev Controller (SPDK1 ) core 3: 5080.67 IO/s 19.68 secs/100000 ios 00:12:59.692 ======================================================== 00:12:59.692 00:12:59.692 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:59.692 [2024-11-04 16:24:26.357031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.692 Initializing NVMe Controllers 00:12:59.692 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.692 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.692 Namespace ID: 1 size: 0GB 00:12:59.692 Initialization complete. 00:12:59.692 INFO: using host memory buffer for IO 00:12:59.692 Hello world! 
00:12:59.692 [2024-11-04 16:24:26.393289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.692 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:59.948 [2024-11-04 16:24:26.677095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.879 Initializing NVMe Controllers 00:13:00.879 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.879 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.879 Initialization complete. Launching workers. 00:13:00.879 submit (in ns) avg, min, max = 6885.6, 3142.9, 3999939.0 00:13:00.879 complete (in ns) avg, min, max = 20372.0, 1733.3, 3999225.7 00:13:00.879 00:13:00.879 Submit histogram 00:13:00.879 ================ 00:13:00.879 Range in us Cumulative Count 00:13:00.879 3.139 - 3.154: 0.0120% ( 2) 00:13:00.879 3.154 - 3.170: 0.0301% ( 3) 00:13:00.879 3.170 - 3.185: 0.0482% ( 3) 00:13:00.879 3.185 - 3.200: 0.0904% ( 7) 00:13:00.879 3.200 - 3.215: 0.3734% ( 47) 00:13:00.879 3.215 - 3.230: 1.6082% ( 205) 00:13:00.879 3.230 - 3.246: 3.9935% ( 396) 00:13:00.879 3.246 - 3.261: 7.4389% ( 572) 00:13:00.879 3.261 - 3.276: 12.6551% ( 866) 00:13:00.879 3.276 - 3.291: 18.7688% ( 1015) 00:13:00.879 3.291 - 3.307: 24.8524% ( 1010) 00:13:00.879 3.307 - 3.322: 31.4299% ( 1092) 00:13:00.879 3.322 - 3.337: 37.8870% ( 1072) 00:13:00.879 3.337 - 3.352: 44.1754% ( 1044) 00:13:00.879 3.352 - 3.368: 50.3072% ( 1018) 00:13:00.879 3.368 - 3.383: 56.9269% ( 1099) 00:13:00.879 3.383 - 3.398: 62.8599% ( 985) 00:13:00.879 3.398 - 3.413: 68.4616% ( 930) 00:13:00.879 3.413 - 3.429: 74.1417% ( 943) 00:13:00.879 3.429 - 3.444: 78.3520% ( 699) 00:13:00.879 3.444 - 3.459: 81.6227% ( 543) 
00:13:00.879 3.459 - 3.474: 84.2609% ( 438) 00:13:00.879 3.474 - 3.490: 86.0077% ( 290) 00:13:00.879 3.490 - 3.505: 86.9775% ( 161) 00:13:00.879 3.505 - 3.520: 87.5738% ( 99) 00:13:00.879 3.520 - 3.535: 88.0496% ( 79) 00:13:00.879 3.535 - 3.550: 88.6158% ( 94) 00:13:00.879 3.550 - 3.566: 89.2844% ( 111) 00:13:00.879 3.566 - 3.581: 90.1277% ( 140) 00:13:00.879 3.581 - 3.596: 91.1216% ( 165) 00:13:00.879 3.596 - 3.611: 92.1515% ( 171) 00:13:00.879 3.611 - 3.627: 93.1394% ( 164) 00:13:00.879 3.627 - 3.642: 94.1332% ( 165) 00:13:00.879 3.642 - 3.657: 95.1030% ( 161) 00:13:00.879 3.657 - 3.672: 95.9643% ( 143) 00:13:00.879 3.672 - 3.688: 96.7775% ( 135) 00:13:00.879 3.688 - 3.703: 97.4461% ( 111) 00:13:00.879 3.703 - 3.718: 98.0785% ( 105) 00:13:00.879 3.718 - 3.733: 98.6026% ( 87) 00:13:00.879 3.733 - 3.749: 98.9941% ( 65) 00:13:00.879 3.749 - 3.764: 99.1808% ( 31) 00:13:00.879 3.764 - 3.779: 99.3916% ( 35) 00:13:00.879 3.779 - 3.794: 99.4940% ( 17) 00:13:00.879 3.794 - 3.810: 99.5904% ( 16) 00:13:00.879 3.810 - 3.825: 99.6386% ( 8) 00:13:00.879 3.825 - 3.840: 99.6928% ( 9) 00:13:00.879 3.840 - 3.855: 99.7049% ( 2) 00:13:00.879 3.855 - 3.870: 99.7109% ( 1) 00:13:00.879 3.870 - 3.886: 99.7169% ( 1) 00:13:00.879 3.886 - 3.901: 99.7229% ( 1) 00:13:00.879 3.901 - 3.931: 99.7289% ( 1) 00:13:00.879 5.364 - 5.394: 99.7350% ( 1) 00:13:00.880 5.516 - 5.547: 99.7410% ( 1) 00:13:00.880 5.608 - 5.638: 99.7470% ( 1) 00:13:00.880 5.730 - 5.760: 99.7530% ( 1) 00:13:00.880 5.790 - 5.821: 99.7591% ( 1) 00:13:00.880 5.973 - 6.004: 99.7651% ( 1) 00:13:00.880 6.004 - 6.034: 99.7711% ( 1) 00:13:00.880 6.034 - 6.065: 99.7832% ( 2) 00:13:00.880 6.065 - 6.095: 99.7892% ( 1) 00:13:00.880 6.217 - 6.248: 99.8012% ( 2) 00:13:00.880 6.461 - 6.491: 99.8073% ( 1) 00:13:00.880 6.583 - 6.613: 99.8133% ( 1) 00:13:00.880 6.613 - 6.644: 99.8193% ( 1) 00:13:00.880 7.223 - 7.253: 99.8253% ( 1) 00:13:00.880 7.253 - 7.284: 99.8313% ( 1) 00:13:00.880 7.467 - 7.497: 99.8434% ( 2) 00:13:00.880 7.528 - 7.558: 
99.8554% ( 2) 00:13:00.880 7.650 - 7.680: 99.8615% ( 1) 00:13:00.880 7.710 - 7.741: 99.8675% ( 1) 00:13:00.880 7.771 - 7.802: 99.8735% ( 1) 00:13:00.880 8.716 - 8.777: 99.8795% ( 1) 00:13:00.880 8.838 - 8.899: 99.8856% ( 1) 00:13:00.880 9.996 - 10.057: 99.8916% ( 1) 00:13:00.880 13.653 - 13.714: 99.8976% ( 1) 00:13:00.880 15.177 - 15.238: 99.9036% ( 1) 00:13:00.880 19.017 - 19.139: 99.9096% ( 1) 00:13:00.880 2012.891 - 2028.495: 99.9157% ( 1) 00:13:00.880 3994.575 - 4025.783: 100.0000% ( 14) 00:13:00.880 00:13:00.880 [2024-11-04 16:24:27.697024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.138 Complete histogram 00:13:01.138 ================== 00:13:01.138 Range in us Cumulative Count 00:13:01.138 1.730 - 1.737: 0.0060% ( 1) 00:13:01.138 1.737 - 1.745: 0.0181% ( 2) 00:13:01.138 1.745 - 1.752: 0.0241% ( 1) 00:13:01.138 1.760 - 1.768: 0.1687% ( 24) 00:13:01.138 1.768 - 1.775: 0.6023% ( 72) 00:13:01.138 1.775 - 1.783: 1.0842% ( 80) 00:13:01.138 1.783 - 1.790: 1.9094% ( 137) 00:13:01.138 1.790 - 1.798: 2.5720% ( 110) 00:13:01.138 1.798 - 1.806: 3.3129% ( 123) 00:13:01.138 1.806 - 1.813: 7.7762% ( 741) 00:13:01.138 1.813 - 1.821: 32.3214% ( 4075) 00:13:01.138 1.821 - 1.829: 67.8171% ( 5893) 00:13:01.138 1.829 - 1.836: 85.8511% ( 2994) 00:13:01.138 1.836 - 1.844: 91.1336% ( 877) 00:13:01.138 1.844 - 1.851: 94.0850% ( 490) 00:13:01.138 1.851 - 1.859: 95.9945% ( 317) 00:13:01.138 1.859 - 1.867: 96.7293% ( 122) 00:13:01.138 1.867 - 1.874: 97.0666% ( 56) 00:13:01.138 1.874 - 1.882: 97.3136% ( 41) 00:13:01.138 1.882 - 1.890: 97.6690% ( 59) 00:13:01.138 1.890 - 1.897: 98.1448% ( 79) 00:13:01.138 1.897 - 1.905: 98.6327% ( 81) 00:13:01.138 1.905 - 1.912: 98.9339% ( 50) 00:13:01.138 1.912 - 1.920: 99.1085% ( 29) 00:13:01.138 1.920 - 1.928: 99.1387% ( 5) 00:13:01.138 1.928 - 1.935: 99.1748% ( 6) 00:13:01.138 1.935 - 1.943: 99.1929% ( 3) 00:13:01.138 1.943 - 1.950: 99.2170% ( 4) 00:13:01.138 1.950 - 1.966: 99.2651% 
( 8) 00:13:01.138 1.966 - 1.981: 99.2712% ( 1) 00:13:01.138 1.981 - 1.996: 99.2772% ( 1) 00:13:01.138 2.011 - 2.027: 99.2832% ( 1) 00:13:01.138 2.027 - 2.042: 99.2892% ( 1) 00:13:01.138 2.042 - 2.057: 99.2953% ( 1) 00:13:01.138 2.088 - 2.103: 99.3013% ( 1) 00:13:01.138 2.103 - 2.118: 99.3073% ( 1) 00:13:01.138 2.179 - 2.194: 99.3133% ( 1) 00:13:01.138 3.688 - 3.703: 99.3194% ( 1) 00:13:01.138 3.749 - 3.764: 99.3254% ( 1) 00:13:01.138 3.764 - 3.779: 99.3314% ( 1) 00:13:01.138 3.840 - 3.855: 99.3374% ( 1) 00:13:01.138 3.886 - 3.901: 99.3435% ( 1) 00:13:01.138 3.992 - 4.023: 99.3495% ( 1) 00:13:01.138 4.267 - 4.297: 99.3555% ( 1) 00:13:01.138 4.297 - 4.328: 99.3615% ( 1) 00:13:01.138 4.358 - 4.389: 99.3675% ( 1) 00:13:01.138 4.419 - 4.450: 99.3736% ( 1) 00:13:01.138 4.602 - 4.632: 99.3796% ( 1) 00:13:01.138 4.876 - 4.907: 99.3916% ( 2) 00:13:01.138 4.998 - 5.029: 99.3977% ( 1) 00:13:01.138 5.090 - 5.120: 99.4097% ( 2) 00:13:01.138 5.394 - 5.425: 99.4157% ( 1) 00:13:01.138 5.486 - 5.516: 99.4218% ( 1) 00:13:01.138 5.638 - 5.669: 99.4278% ( 1) 00:13:01.138 5.699 - 5.730: 99.4458% ( 3) 00:13:01.138 5.790 - 5.821: 99.4519% ( 1) 00:13:01.138 6.095 - 6.126: 99.4579% ( 1) 00:13:01.138 6.156 - 6.187: 99.4639% ( 1) 00:13:01.138 6.278 - 6.309: 99.4699% ( 1) 00:13:01.138 6.461 - 6.491: 99.4760% ( 1) 00:13:01.138 6.583 - 6.613: 99.4820% ( 1) 00:13:01.138 6.857 - 6.888: 99.4880% ( 1) 00:13:01.138 6.918 - 6.949: 99.4940% ( 1) 00:13:01.138 7.375 - 7.406: 99.5001% ( 1) 00:13:01.138 8.046 - 8.107: 99.5061% ( 1) 00:13:01.138 10.179 - 10.240: 99.5121% ( 1) 00:13:01.138 10.301 - 10.362: 99.5181% ( 1) 00:13:01.138 14.385 - 14.446: 99.5242% ( 1) 00:13:01.138 41.935 - 42.179: 99.5302% ( 1) 00:13:01.138 2028.495 - 2044.099: 99.5362% ( 1) 00:13:01.138 2168.930 - 2184.533: 99.5422% ( 1) 00:13:01.138 3947.764 - 3963.368: 99.5482% ( 1) 00:13:01.138 3994.575 - 4025.783: 100.0000% ( 75) 00:13:01.138 00:13:01.138 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:01.138 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:01.138 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:01.138 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:01.138 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:01.138 [ 00:13:01.138 { 00:13:01.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:01.138 "subtype": "Discovery", 00:13:01.138 "listen_addresses": [], 00:13:01.138 "allow_any_host": true, 00:13:01.138 "hosts": [] 00:13:01.138 }, 00:13:01.138 { 00:13:01.138 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:01.138 "subtype": "NVMe", 00:13:01.138 "listen_addresses": [ 00:13:01.138 { 00:13:01.138 "trtype": "VFIOUSER", 00:13:01.138 "adrfam": "IPv4", 00:13:01.138 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:01.138 "trsvcid": "0" 00:13:01.138 } 00:13:01.138 ], 00:13:01.138 "allow_any_host": true, 00:13:01.138 "hosts": [], 00:13:01.138 "serial_number": "SPDK1", 00:13:01.138 "model_number": "SPDK bdev Controller", 00:13:01.138 "max_namespaces": 32, 00:13:01.138 "min_cntlid": 1, 00:13:01.138 "max_cntlid": 65519, 00:13:01.138 "namespaces": [ 00:13:01.138 { 00:13:01.138 "nsid": 1, 00:13:01.138 "bdev_name": "Malloc1", 00:13:01.138 "name": "Malloc1", 00:13:01.138 "nguid": "641265CB68C045379376B8CA2DE0E883", 00:13:01.138 "uuid": "641265cb-68c0-4537-9376-b8ca2de0e883" 00:13:01.138 } 00:13:01.138 ] 00:13:01.138 }, 00:13:01.138 { 00:13:01.138 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:01.138 "subtype": "NVMe", 00:13:01.138 "listen_addresses": [ 00:13:01.138 { 00:13:01.138 "trtype": "VFIOUSER", 
00:13:01.139 "adrfam": "IPv4", 00:13:01.139 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:01.139 "trsvcid": "0" 00:13:01.139 } 00:13:01.139 ], 00:13:01.139 "allow_any_host": true, 00:13:01.139 "hosts": [], 00:13:01.139 "serial_number": "SPDK2", 00:13:01.139 "model_number": "SPDK bdev Controller", 00:13:01.139 "max_namespaces": 32, 00:13:01.139 "min_cntlid": 1, 00:13:01.139 "max_cntlid": 65519, 00:13:01.139 "namespaces": [ 00:13:01.139 { 00:13:01.139 "nsid": 1, 00:13:01.139 "bdev_name": "Malloc2", 00:13:01.139 "name": "Malloc2", 00:13:01.139 "nguid": "73B658BF50FE484994C5F40B4281BCED", 00:13:01.139 "uuid": "73b658bf-50fe-4849-94c5-f40b4281bced" 00:13:01.139 } 00:13:01.139 ] 00:13:01.139 } 00:13:01.139 ] 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2779438 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:01.139 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:01.397 [2024-11-04 16:24:28.090045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.397 Malloc3 00:13:01.397 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:01.655 [2024-11-04 16:24:28.339902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.655 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:01.655 Asynchronous Event Request test 00:13:01.655 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.655 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.655 Registering asynchronous event callbacks... 00:13:01.655 Starting namespace attribute notice tests for all controllers... 00:13:01.655 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:01.655 aer_cb - Changed Namespace 00:13:01.655 Cleaning up... 
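The trace above shows the harness launching the `aer` test in the background and then spinning in `waitforfile` until the test touches `/tmp/aer_touch_file`. A minimal Python sketch of that wait-for-file pattern follows; the `wait_for_file` name, timeout, and poll interval are illustrative, not taken from `autotest_common.sh`:

```python
import os
import time

def wait_for_file(path: str, timeout_s: float = 30.0, poll_s: float = 1.0) -> bool:
    """Poll until `path` exists, mirroring the waitforfile loop traced above.

    Returns True once the file appears, False if the timeout elapses first.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    return os.path.exists(path)
```

In the log the touch file doubles as a synchronization point: the RPC calls that follow (`bdev_malloc_create`, `nvmf_subsystem_add_ns`) only run once the AER listener is known to be registered.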
00:13:01.915 [ 00:13:01.915 { 00:13:01.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:01.915 "subtype": "Discovery", 00:13:01.915 "listen_addresses": [], 00:13:01.915 "allow_any_host": true, 00:13:01.915 "hosts": [] 00:13:01.915 }, 00:13:01.915 { 00:13:01.915 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:01.915 "subtype": "NVMe", 00:13:01.915 "listen_addresses": [ 00:13:01.915 { 00:13:01.915 "trtype": "VFIOUSER", 00:13:01.915 "adrfam": "IPv4", 00:13:01.915 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:01.915 "trsvcid": "0" 00:13:01.915 } 00:13:01.915 ], 00:13:01.915 "allow_any_host": true, 00:13:01.915 "hosts": [], 00:13:01.915 "serial_number": "SPDK1", 00:13:01.915 "model_number": "SPDK bdev Controller", 00:13:01.915 "max_namespaces": 32, 00:13:01.915 "min_cntlid": 1, 00:13:01.915 "max_cntlid": 65519, 00:13:01.915 "namespaces": [ 00:13:01.915 { 00:13:01.915 "nsid": 1, 00:13:01.915 "bdev_name": "Malloc1", 00:13:01.915 "name": "Malloc1", 00:13:01.915 "nguid": "641265CB68C045379376B8CA2DE0E883", 00:13:01.915 "uuid": "641265cb-68c0-4537-9376-b8ca2de0e883" 00:13:01.915 }, 00:13:01.915 { 00:13:01.915 "nsid": 2, 00:13:01.915 "bdev_name": "Malloc3", 00:13:01.915 "name": "Malloc3", 00:13:01.915 "nguid": "7C8974F965124A33AAA1E01C0A341E46", 00:13:01.915 "uuid": "7c8974f9-6512-4a33-aaa1-e01c0a341e46" 00:13:01.915 } 00:13:01.915 ] 00:13:01.915 }, 00:13:01.915 { 00:13:01.915 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:01.915 "subtype": "NVMe", 00:13:01.915 "listen_addresses": [ 00:13:01.915 { 00:13:01.915 "trtype": "VFIOUSER", 00:13:01.915 "adrfam": "IPv4", 00:13:01.915 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:01.915 "trsvcid": "0" 00:13:01.915 } 00:13:01.915 ], 00:13:01.915 "allow_any_host": true, 00:13:01.915 "hosts": [], 00:13:01.915 "serial_number": "SPDK2", 00:13:01.915 "model_number": "SPDK bdev Controller", 00:13:01.915 "max_namespaces": 32, 00:13:01.915 "min_cntlid": 1, 00:13:01.915 "max_cntlid": 65519, 00:13:01.915 "namespaces": [ 
00:13:01.915 { 00:13:01.915 "nsid": 1, 00:13:01.915 "bdev_name": "Malloc2", 00:13:01.915 "name": "Malloc2", 00:13:01.915 "nguid": "73B658BF50FE484994C5F40B4281BCED", 00:13:01.915 "uuid": "73b658bf-50fe-4849-94c5-f40b4281bced" 00:13:01.915 } 00:13:01.915 ] 00:13:01.915 } 00:13:01.915 ] 00:13:01.915 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2779438 00:13:01.915 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.915 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:01.915 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:01.915 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:01.915 [2024-11-04 16:24:28.586323] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:13:01.915 [2024-11-04 16:24:28.586356] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779450 ] 00:13:01.915 [2024-11-04 16:24:28.624941] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:01.915 [2024-11-04 16:24:28.630196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:01.915 [2024-11-04 16:24:28.630216] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f09ee386000 00:13:01.915 [2024-11-04 16:24:28.631202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.632209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.633212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.634223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.635228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.636240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.637246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.915 
[2024-11-04 16:24:28.638254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.915 [2024-11-04 16:24:28.639260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:01.915 [2024-11-04 16:24:28.639273] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f09ee37b000 00:13:01.915 [2024-11-04 16:24:28.640186] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:01.915 [2024-11-04 16:24:28.649541] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:01.915 [2024-11-04 16:24:28.649568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:01.915 [2024-11-04 16:24:28.653653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:01.915 [2024-11-04 16:24:28.653690] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:01.915 [2024-11-04 16:24:28.653762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:01.915 [2024-11-04 16:24:28.653776] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:01.915 [2024-11-04 16:24:28.653781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:01.915 [2024-11-04 16:24:28.654658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:01.915 [2024-11-04 16:24:28.654668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:01.915 [2024-11-04 16:24:28.654675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:01.915 [2024-11-04 16:24:28.655667] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:01.915 [2024-11-04 16:24:28.655676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:01.915 [2024-11-04 16:24:28.655683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:01.915 [2024-11-04 16:24:28.656681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:01.915 [2024-11-04 16:24:28.656690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:01.915 [2024-11-04 16:24:28.657690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:01.915 [2024-11-04 16:24:28.657700] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:01.916 [2024-11-04 16:24:28.657704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:01.916 [2024-11-04 16:24:28.657710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:01.916 [2024-11-04 16:24:28.657817] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:01.916 [2024-11-04 16:24:28.657822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:01.916 [2024-11-04 16:24:28.657826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:01.916 [2024-11-04 16:24:28.660606] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:01.916 [2024-11-04 16:24:28.660715] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:01.916 [2024-11-04 16:24:28.661720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:01.916 [2024-11-04 16:24:28.662728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:01.916 [2024-11-04 16:24:28.662766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:01.916 [2024-11-04 16:24:28.663742] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:01.916 [2024-11-04 16:24:28.663751] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:01.916 [2024-11-04 16:24:28.663758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.663775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:01.916 [2024-11-04 16:24:28.663782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.663793] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.916 [2024-11-04 16:24:28.663798] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.916 [2024-11-04 16:24:28.663801] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:01.916 [2024-11-04 16:24:28.663812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.671608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.671621] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:01.916 [2024-11-04 16:24:28.671626] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:01.916 [2024-11-04 16:24:28.671630] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:01.916 [2024-11-04 16:24:28.671634] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:01.916 [2024-11-04 16:24:28.671641] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:01.916 [2024-11-04 16:24:28.671648] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:01.916 [2024-11-04 16:24:28.671654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.671662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.671671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.679611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.679628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.916 [2024-11-04 16:24:28.679635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.916 [2024-11-04 16:24:28.679643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.916 [2024-11-04 16:24:28.679651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.916 [2024-11-04 16:24:28.679655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.679661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.679669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.687607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.687620] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:01.916 [2024-11-04 16:24:28.687625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.687631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.687636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.687644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.695616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.695671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.695678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:01.916 
[2024-11-04 16:24:28.695685] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:01.916 [2024-11-04 16:24:28.695689] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:01.916 [2024-11-04 16:24:28.695692] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:01.916 [2024-11-04 16:24:28.695698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.703607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.703617] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:01.916 [2024-11-04 16:24:28.703628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.703635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.703641] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.916 [2024-11-04 16:24:28.703645] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.916 [2024-11-04 16:24:28.703648] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:01.916 [2024-11-04 16:24:28.703654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.711606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.711620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.711629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.711636] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.916 [2024-11-04 16:24:28.711640] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.916 [2024-11-04 16:24:28.711645] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:01.916 [2024-11-04 16:24:28.711651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.719605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.719615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719647] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:01.916 [2024-11-04 16:24:28.719651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:01.916 [2024-11-04 16:24:28.719656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:01.916 [2024-11-04 16:24:28.719670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:01.916 [2024-11-04 16:24:28.727605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:01.916 [2024-11-04 16:24:28.727617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:01.917 [2024-11-04 16:24:28.735606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:01.917 [2024-11-04 16:24:28.735618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:02.176 [2024-11-04 16:24:28.743607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:02.176 [2024-11-04 
16:24:28.743619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.176 [2024-11-04 16:24:28.751609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:02.176 [2024-11-04 16:24:28.751624] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:02.176 [2024-11-04 16:24:28.751629] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:02.176 [2024-11-04 16:24:28.751632] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:02.176 [2024-11-04 16:24:28.751635] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:02.176 [2024-11-04 16:24:28.751638] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:02.176 [2024-11-04 16:24:28.751644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:02.176 [2024-11-04 16:24:28.751652] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:02.176 [2024-11-04 16:24:28.751656] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:02.176 [2024-11-04 16:24:28.751659] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.176 [2024-11-04 16:24:28.751665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:02.176 [2024-11-04 16:24:28.751671] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:02.176 [2024-11-04 16:24:28.751675] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.176 [2024-11-04 16:24:28.751678] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.176 [2024-11-04 16:24:28.751683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.176 [2024-11-04 16:24:28.751692] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:02.176 [2024-11-04 16:24:28.751696] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:02.176 [2024-11-04 16:24:28.751699] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.176 [2024-11-04 16:24:28.751704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:02.176 [2024-11-04 16:24:28.759608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:02.176 [2024-11-04 16:24:28.759622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:02.176 [2024-11-04 16:24:28.759632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:02.176 [2024-11-04 16:24:28.759638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:02.176 ===================================================== 00:13:02.176 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:02.176 ===================================================== 00:13:02.176 Controller Capabilities/Features 00:13:02.177 
================================ 00:13:02.177 Vendor ID: 4e58 00:13:02.177 Subsystem Vendor ID: 4e58 00:13:02.177 Serial Number: SPDK2 00:13:02.177 Model Number: SPDK bdev Controller 00:13:02.177 Firmware Version: 25.01 00:13:02.177 Recommended Arb Burst: 6 00:13:02.177 IEEE OUI Identifier: 8d 6b 50 00:13:02.177 Multi-path I/O 00:13:02.177 May have multiple subsystem ports: Yes 00:13:02.177 May have multiple controllers: Yes 00:13:02.177 Associated with SR-IOV VF: No 00:13:02.177 Max Data Transfer Size: 131072 00:13:02.177 Max Number of Namespaces: 32 00:13:02.177 Max Number of I/O Queues: 127 00:13:02.177 NVMe Specification Version (VS): 1.3 00:13:02.177 NVMe Specification Version (Identify): 1.3 00:13:02.177 Maximum Queue Entries: 256 00:13:02.177 Contiguous Queues Required: Yes 00:13:02.177 Arbitration Mechanisms Supported 00:13:02.177 Weighted Round Robin: Not Supported 00:13:02.177 Vendor Specific: Not Supported 00:13:02.177 Reset Timeout: 15000 ms 00:13:02.177 Doorbell Stride: 4 bytes 00:13:02.177 NVM Subsystem Reset: Not Supported 00:13:02.177 Command Sets Supported 00:13:02.177 NVM Command Set: Supported 00:13:02.177 Boot Partition: Not Supported 00:13:02.177 Memory Page Size Minimum: 4096 bytes 00:13:02.177 Memory Page Size Maximum: 4096 bytes 00:13:02.177 Persistent Memory Region: Not Supported 00:13:02.177 Optional Asynchronous Events Supported 00:13:02.177 Namespace Attribute Notices: Supported 00:13:02.177 Firmware Activation Notices: Not Supported 00:13:02.177 ANA Change Notices: Not Supported 00:13:02.177 PLE Aggregate Log Change Notices: Not Supported 00:13:02.177 LBA Status Info Alert Notices: Not Supported 00:13:02.177 EGE Aggregate Log Change Notices: Not Supported 00:13:02.177 Normal NVM Subsystem Shutdown event: Not Supported 00:13:02.177 Zone Descriptor Change Notices: Not Supported 00:13:02.177 Discovery Log Change Notices: Not Supported 00:13:02.177 Controller Attributes 00:13:02.177 128-bit Host Identifier: Supported 00:13:02.177 
Non-Operational Permissive Mode: Not Supported 00:13:02.177 NVM Sets: Not Supported 00:13:02.177 Read Recovery Levels: Not Supported 00:13:02.177 Endurance Groups: Not Supported 00:13:02.177 Predictable Latency Mode: Not Supported 00:13:02.177 Traffic Based Keep ALive: Not Supported 00:13:02.177 Namespace Granularity: Not Supported 00:13:02.177 SQ Associations: Not Supported 00:13:02.177 UUID List: Not Supported 00:13:02.177 Multi-Domain Subsystem: Not Supported 00:13:02.177 Fixed Capacity Management: Not Supported 00:13:02.177 Variable Capacity Management: Not Supported 00:13:02.177 Delete Endurance Group: Not Supported 00:13:02.177 Delete NVM Set: Not Supported 00:13:02.177 Extended LBA Formats Supported: Not Supported 00:13:02.177 Flexible Data Placement Supported: Not Supported 00:13:02.177 00:13:02.177 Controller Memory Buffer Support 00:13:02.177 ================================ 00:13:02.177 Supported: No 00:13:02.177 00:13:02.177 Persistent Memory Region Support 00:13:02.177 ================================ 00:13:02.177 Supported: No 00:13:02.177 00:13:02.177 Admin Command Set Attributes 00:13:02.177 ============================ 00:13:02.177 Security Send/Receive: Not Supported 00:13:02.177 Format NVM: Not Supported 00:13:02.177 Firmware Activate/Download: Not Supported 00:13:02.177 Namespace Management: Not Supported 00:13:02.177 Device Self-Test: Not Supported 00:13:02.177 Directives: Not Supported 00:13:02.177 NVMe-MI: Not Supported 00:13:02.177 Virtualization Management: Not Supported 00:13:02.177 Doorbell Buffer Config: Not Supported 00:13:02.177 Get LBA Status Capability: Not Supported 00:13:02.177 Command & Feature Lockdown Capability: Not Supported 00:13:02.177 Abort Command Limit: 4 00:13:02.177 Async Event Request Limit: 4 00:13:02.177 Number of Firmware Slots: N/A 00:13:02.177 Firmware Slot 1 Read-Only: N/A 00:13:02.177 Firmware Activation Without Reset: N/A 00:13:02.177 Multiple Update Detection Support: N/A 00:13:02.177 Firmware Update 
Granularity: No Information Provided 00:13:02.177 Per-Namespace SMART Log: No 00:13:02.177 Asymmetric Namespace Access Log Page: Not Supported 00:13:02.177 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:02.177 Command Effects Log Page: Supported 00:13:02.177 Get Log Page Extended Data: Supported 00:13:02.177 Telemetry Log Pages: Not Supported 00:13:02.177 Persistent Event Log Pages: Not Supported 00:13:02.177 Supported Log Pages Log Page: May Support 00:13:02.177 Commands Supported & Effects Log Page: Not Supported 00:13:02.177 Feature Identifiers & Effects Log Page:May Support 00:13:02.177 NVMe-MI Commands & Effects Log Page: May Support 00:13:02.177 Data Area 4 for Telemetry Log: Not Supported 00:13:02.177 Error Log Page Entries Supported: 128 00:13:02.177 Keep Alive: Supported 00:13:02.177 Keep Alive Granularity: 10000 ms 00:13:02.177 00:13:02.177 NVM Command Set Attributes 00:13:02.177 ========================== 00:13:02.177 Submission Queue Entry Size 00:13:02.177 Max: 64 00:13:02.177 Min: 64 00:13:02.177 Completion Queue Entry Size 00:13:02.177 Max: 16 00:13:02.177 Min: 16 00:13:02.177 Number of Namespaces: 32 00:13:02.177 Compare Command: Supported 00:13:02.177 Write Uncorrectable Command: Not Supported 00:13:02.177 Dataset Management Command: Supported 00:13:02.177 Write Zeroes Command: Supported 00:13:02.177 Set Features Save Field: Not Supported 00:13:02.177 Reservations: Not Supported 00:13:02.177 Timestamp: Not Supported 00:13:02.177 Copy: Supported 00:13:02.177 Volatile Write Cache: Present 00:13:02.177 Atomic Write Unit (Normal): 1 00:13:02.177 Atomic Write Unit (PFail): 1 00:13:02.177 Atomic Compare & Write Unit: 1 00:13:02.177 Fused Compare & Write: Supported 00:13:02.177 Scatter-Gather List 00:13:02.177 SGL Command Set: Supported (Dword aligned) 00:13:02.177 SGL Keyed: Not Supported 00:13:02.177 SGL Bit Bucket Descriptor: Not Supported 00:13:02.177 SGL Metadata Pointer: Not Supported 00:13:02.177 Oversized SGL: Not Supported 00:13:02.177 SGL 
Metadata Address: Not Supported 00:13:02.177 SGL Offset: Not Supported 00:13:02.177 Transport SGL Data Block: Not Supported 00:13:02.177 Replay Protected Memory Block: Not Supported 00:13:02.177 00:13:02.177 Firmware Slot Information 00:13:02.177 ========================= 00:13:02.177 Active slot: 1 00:13:02.177 Slot 1 Firmware Revision: 25.01 00:13:02.177 00:13:02.177 00:13:02.177 Commands Supported and Effects 00:13:02.177 ============================== 00:13:02.177 Admin Commands 00:13:02.177 -------------- 00:13:02.177 Get Log Page (02h): Supported 00:13:02.177 Identify (06h): Supported 00:13:02.177 Abort (08h): Supported 00:13:02.177 Set Features (09h): Supported 00:13:02.177 Get Features (0Ah): Supported 00:13:02.177 Asynchronous Event Request (0Ch): Supported 00:13:02.177 Keep Alive (18h): Supported 00:13:02.177 I/O Commands 00:13:02.177 ------------ 00:13:02.177 Flush (00h): Supported LBA-Change 00:13:02.177 Write (01h): Supported LBA-Change 00:13:02.177 Read (02h): Supported 00:13:02.177 Compare (05h): Supported 00:13:02.177 Write Zeroes (08h): Supported LBA-Change 00:13:02.177 Dataset Management (09h): Supported LBA-Change 00:13:02.177 Copy (19h): Supported LBA-Change 00:13:02.177 00:13:02.177 Error Log 00:13:02.177 ========= 00:13:02.177 00:13:02.177 Arbitration 00:13:02.177 =========== 00:13:02.177 Arbitration Burst: 1 00:13:02.177 00:13:02.177 Power Management 00:13:02.177 ================ 00:13:02.177 Number of Power States: 1 00:13:02.177 Current Power State: Power State #0 00:13:02.177 Power State #0: 00:13:02.177 Max Power: 0.00 W 00:13:02.177 Non-Operational State: Operational 00:13:02.177 Entry Latency: Not Reported 00:13:02.177 Exit Latency: Not Reported 00:13:02.177 Relative Read Throughput: 0 00:13:02.177 Relative Read Latency: 0 00:13:02.177 Relative Write Throughput: 0 00:13:02.177 Relative Write Latency: 0 00:13:02.177 Idle Power: Not Reported 00:13:02.177 Active Power: Not Reported 00:13:02.177 Non-Operational Permissive Mode: Not 
Supported 00:13:02.177 00:13:02.177 Health Information 00:13:02.177 ================== 00:13:02.177 Critical Warnings: 00:13:02.177 Available Spare Space: OK 00:13:02.177 Temperature: OK 00:13:02.177 Device Reliability: OK 00:13:02.177 Read Only: No 00:13:02.177 Volatile Memory Backup: OK 00:13:02.177 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:02.178 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:02.178 Available Spare: 0% 00:13:02.178 Available Sp[2024-11-04 16:24:28.759722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:02.178 [2024-11-04 16:24:28.767607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:02.178 [2024-11-04 16:24:28.767635] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:02.178 [2024-11-04 16:24:28.767644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.178 [2024-11-04 16:24:28.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.178 [2024-11-04 16:24:28.767656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.178 [2024-11-04 16:24:28.767661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.178 [2024-11-04 16:24:28.767710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:02.178 [2024-11-04 16:24:28.767721] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:02.178 
[2024-11-04 16:24:28.768717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.178 [2024-11-04 16:24:28.768759] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:02.178 [2024-11-04 16:24:28.768767] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:02.178 [2024-11-04 16:24:28.769719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:02.178 [2024-11-04 16:24:28.769731] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:02.178 [2024-11-04 16:24:28.769777] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:02.178 [2024-11-04 16:24:28.770742] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.178 are Threshold: 0% 00:13:02.178 Life Percentage Used: 0% 00:13:02.178 Data Units Read: 0 00:13:02.178 Data Units Written: 0 00:13:02.178 Host Read Commands: 0 00:13:02.178 Host Write Commands: 0 00:13:02.178 Controller Busy Time: 0 minutes 00:13:02.178 Power Cycles: 0 00:13:02.178 Power On Hours: 0 hours 00:13:02.178 Unsafe Shutdowns: 0 00:13:02.178 Unrecoverable Media Errors: 0 00:13:02.178 Lifetime Error Log Entries: 0 00:13:02.178 Warning Temperature Time: 0 minutes 00:13:02.178 Critical Temperature Time: 0 minutes 00:13:02.178 00:13:02.178 Number of Queues 00:13:02.178 ================ 00:13:02.178 Number of I/O Submission Queues: 127 00:13:02.178 Number of I/O Completion Queues: 127 00:13:02.178 00:13:02.178 Active Namespaces 00:13:02.178 ================= 00:13:02.178 Namespace ID:1 00:13:02.178 Error Recovery Timeout: Unlimited 
00:13:02.178 Command Set Identifier: NVM (00h) 00:13:02.178 Deallocate: Supported 00:13:02.178 Deallocated/Unwritten Error: Not Supported 00:13:02.178 Deallocated Read Value: Unknown 00:13:02.178 Deallocate in Write Zeroes: Not Supported 00:13:02.178 Deallocated Guard Field: 0xFFFF 00:13:02.178 Flush: Supported 00:13:02.178 Reservation: Supported 00:13:02.178 Namespace Sharing Capabilities: Multiple Controllers 00:13:02.178 Size (in LBAs): 131072 (0GiB) 00:13:02.178 Capacity (in LBAs): 131072 (0GiB) 00:13:02.178 Utilization (in LBAs): 131072 (0GiB) 00:13:02.178 NGUID: 73B658BF50FE484994C5F40B4281BCED 00:13:02.178 UUID: 73b658bf-50fe-4849-94c5-f40b4281bced 00:13:02.178 Thin Provisioning: Not Supported 00:13:02.178 Per-NS Atomic Units: Yes 00:13:02.178 Atomic Boundary Size (Normal): 0 00:13:02.178 Atomic Boundary Size (PFail): 0 00:13:02.178 Atomic Boundary Offset: 0 00:13:02.178 Maximum Single Source Range Length: 65535 00:13:02.178 Maximum Copy Length: 65535 00:13:02.178 Maximum Source Range Count: 1 00:13:02.178 NGUID/EUI64 Never Reused: No 00:13:02.178 Namespace Write Protected: No 00:13:02.178 Number of LBA Formats: 1 00:13:02.178 Current LBA Format: LBA Format #00 00:13:02.178 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:02.178 00:13:02.178 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:02.436 [2024-11-04 16:24:29.003992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.705 Initializing NVMe Controllers 00:13:07.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:07.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:07.705 Initialization complete. Launching workers. 00:13:07.705 ======================================================== 00:13:07.705 Latency(us) 00:13:07.705 Device Information : IOPS MiB/s Average min max 00:13:07.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.16 156.02 3205.21 938.66 7450.95 00:13:07.705 ======================================================== 00:13:07.705 Total : 39940.16 156.02 3205.21 938.66 7450.95 00:13:07.705 00:13:07.705 [2024-11-04 16:24:34.105862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.705 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:07.705 [2024-11-04 16:24:34.341548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:12.971 Initializing NVMe Controllers 00:13:12.971 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:12.971 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:12.971 Initialization complete. Launching workers. 
00:13:12.971 ======================================================== 00:13:12.971 Latency(us) 00:13:12.971 Device Information : IOPS MiB/s Average min max 00:13:12.971 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.56 156.08 3203.30 955.31 9353.00 00:13:12.971 ======================================================== 00:13:12.971 Total : 39956.56 156.08 3203.30 955.31 9353.00 00:13:12.971 00:13:12.971 [2024-11-04 16:24:39.361221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:12.971 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:12.971 [2024-11-04 16:24:39.572440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:18.250 [2024-11-04 16:24:44.715693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:18.250 Initializing NVMe Controllers 00:13:18.250 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:18.250 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:18.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:18.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:18.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:18.250 Initialization complete. Launching workers. 
00:13:18.250 Starting thread on core 2 00:13:18.250 Starting thread on core 3 00:13:18.250 Starting thread on core 1 00:13:18.250 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:18.250 [2024-11-04 16:24:45.014093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.540 [2024-11-04 16:24:48.077332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.540 Initializing NVMe Controllers 00:13:21.540 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.540 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.540 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:21.540 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:21.540 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:21.540 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:21.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:21.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:21.540 Initialization complete. Launching workers. 
00:13:21.540 Starting thread on core 1 with urgent priority queue 00:13:21.540 Starting thread on core 2 with urgent priority queue 00:13:21.540 Starting thread on core 3 with urgent priority queue 00:13:21.540 Starting thread on core 0 with urgent priority queue 00:13:21.540 SPDK bdev Controller (SPDK2 ) core 0: 7630.67 IO/s 13.11 secs/100000 ios 00:13:21.540 SPDK bdev Controller (SPDK2 ) core 1: 8948.67 IO/s 11.17 secs/100000 ios 00:13:21.540 SPDK bdev Controller (SPDK2 ) core 2: 9783.67 IO/s 10.22 secs/100000 ios 00:13:21.540 SPDK bdev Controller (SPDK2 ) core 3: 7458.00 IO/s 13.41 secs/100000 ios 00:13:21.540 ======================================================== 00:13:21.540 00:13:21.540 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:21.799 [2024-11-04 16:24:48.371059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.799 Initializing NVMe Controllers 00:13:21.799 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.799 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.799 Namespace ID: 1 size: 0GB 00:13:21.799 Initialization complete. 00:13:21.799 INFO: using host memory buffer for IO 00:13:21.799 Hello world! 
00:13:21.799 [2024-11-04 16:24:48.381126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.799 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:22.057 [2024-11-04 16:24:48.661945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.993 Initializing NVMe Controllers 00:13:22.993 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.993 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.993 Initialization complete. Launching workers. 00:13:22.993 submit (in ns) avg, min, max = 5831.0, 3145.7, 3998444.8 00:13:22.993 complete (in ns) avg, min, max = 20549.7, 1717.1, 3999791.4 00:13:22.993 00:13:22.993 Submit histogram 00:13:22.993 ================ 00:13:22.993 Range in us Cumulative Count 00:13:22.993 3.139 - 3.154: 0.0122% ( 2) 00:13:22.993 3.154 - 3.170: 0.0183% ( 1) 00:13:22.993 3.170 - 3.185: 0.0426% ( 4) 00:13:22.993 3.185 - 3.200: 0.1826% ( 23) 00:13:22.993 3.200 - 3.215: 0.6818% ( 82) 00:13:22.993 3.215 - 3.230: 1.6436% ( 158) 00:13:22.993 3.230 - 3.246: 3.2629% ( 266) 00:13:22.993 3.246 - 3.261: 7.4024% ( 680) 00:13:22.993 3.261 - 3.276: 13.3013% ( 969) 00:13:22.993 3.276 - 3.291: 18.9992% ( 936) 00:13:22.993 3.291 - 3.307: 25.7929% ( 1116) 00:13:22.993 3.307 - 3.322: 32.2274% ( 1057) 00:13:22.993 3.322 - 3.337: 37.3592% ( 843) 00:13:22.993 3.337 - 3.352: 43.0815% ( 940) 00:13:22.993 3.352 - 3.368: 49.0412% ( 979) 00:13:22.993 3.368 - 3.383: 54.3617% ( 874) 00:13:22.993 3.383 - 3.398: 59.1039% ( 779) 00:13:22.993 3.398 - 3.413: 66.3907% ( 1197) 00:13:22.993 3.413 - 3.429: 72.3565% ( 980) 00:13:22.993 3.429 - 3.444: 76.8674% ( 741) 00:13:22.993 3.444 - 3.459: 81.5304% ( 766) 
00:13:22.993 3.459 - 3.474: 84.2150% ( 441) 00:13:22.993 3.474 - 3.490: 86.1265% ( 314) 00:13:22.993 3.490 - 3.505: 87.2649% ( 187) 00:13:22.993 3.505 - 3.520: 87.8249% ( 92) 00:13:22.993 3.520 - 3.535: 88.3119% ( 80) 00:13:22.993 3.535 - 3.550: 88.8294% ( 85) 00:13:22.993 3.550 - 3.566: 89.5781% ( 123) 00:13:22.993 3.566 - 3.581: 90.3269% ( 123) 00:13:22.993 3.581 - 3.596: 91.3070% ( 161) 00:13:22.993 3.596 - 3.611: 92.2323% ( 152) 00:13:22.993 3.611 - 3.627: 93.3463% ( 183) 00:13:22.993 3.627 - 3.642: 94.2777% ( 153) 00:13:22.993 3.642 - 3.657: 95.1056% ( 136) 00:13:22.993 3.657 - 3.672: 95.9213% ( 134) 00:13:22.993 3.672 - 3.688: 96.7127% ( 130) 00:13:22.993 3.688 - 3.703: 97.3824% ( 110) 00:13:22.993 3.703 - 3.718: 97.9668% ( 96) 00:13:22.993 3.718 - 3.733: 98.3990% ( 71) 00:13:22.993 3.733 - 3.749: 98.7399% ( 56) 00:13:22.993 3.749 - 3.764: 99.0625% ( 53) 00:13:22.993 3.764 - 3.779: 99.2695% ( 34) 00:13:22.993 3.779 - 3.794: 99.4521% ( 30) 00:13:22.993 3.794 - 3.810: 99.5434% ( 15) 00:13:22.993 3.810 - 3.825: 99.5800% ( 6) 00:13:22.993 3.825 - 3.840: 99.6104% ( 5) 00:13:22.993 3.840 - 3.855: 99.6347% ( 4) 00:13:22.993 3.855 - 3.870: 99.6408% ( 1) 00:13:22.993 3.870 - 3.886: 99.6469% ( 1) 00:13:22.993 4.145 - 4.175: 99.6530% ( 1) 00:13:22.993 5.090 - 5.120: 99.6591% ( 1) 00:13:22.993 5.120 - 5.150: 99.6652% ( 1) 00:13:22.993 5.181 - 5.211: 99.6713% ( 1) 00:13:22.993 5.211 - 5.242: 99.6774% ( 1) 00:13:22.993 5.242 - 5.272: 99.6834% ( 1) 00:13:22.993 5.272 - 5.303: 99.6895% ( 1) 00:13:22.994 5.303 - 5.333: 99.6956% ( 1) 00:13:22.994 5.333 - 5.364: 99.7017% ( 1) 00:13:22.994 5.425 - 5.455: 99.7078% ( 1) 00:13:22.994 5.486 - 5.516: 99.7139% ( 1) 00:13:22.994 5.516 - 5.547: 99.7261% ( 2) 00:13:22.994 5.699 - 5.730: 99.7321% ( 1) 00:13:22.994 5.790 - 5.821: 99.7382% ( 1) 00:13:22.994 5.882 - 5.912: 99.7443% ( 1) 00:13:22.994 5.912 - 5.943: 99.7565% ( 2) 00:13:22.994 5.973 - 6.004: 99.7687% ( 2) 00:13:22.994 6.034 - 6.065: 99.7748% ( 1) 00:13:22.994 6.065 - 6.095: 
99.7869% ( 2) 00:13:22.994 6.156 - 6.187: 99.7991% ( 2) 00:13:22.994 6.339 - 6.370: 99.8113% ( 2) 00:13:22.994 6.461 - 6.491: 99.8174% ( 1) 00:13:22.994 6.522 - 6.552: 99.8235% ( 1) 00:13:22.994 6.613 - 6.644: 99.8356% ( 2) 00:13:22.994 6.705 - 6.735: 99.8417% ( 1) 00:13:22.994 6.735 - 6.766: 99.8478% ( 1) 00:13:22.994 6.857 - 6.888: 99.8539% ( 1) 00:13:22.994 7.010 - 7.040: 99.8600% ( 1) 00:13:22.994 7.101 - 7.131: 99.8661% ( 1) 00:13:22.994 7.162 - 7.192: 99.8722% ( 1) 00:13:22.994 [2024-11-04 16:24:49.763577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.994 7.223 - 7.253: 99.8782% ( 1) 00:13:22.994 7.253 - 7.284: 99.8843% ( 1) 00:13:22.994 7.314 - 7.345: 99.8904% ( 1) 00:13:22.994 7.771 - 7.802: 99.8965% ( 1) 00:13:22.994 8.046 - 8.107: 99.9026% ( 1) 00:13:22.994 8.229 - 8.290: 99.9087% ( 1) 00:13:22.994 8.960 - 9.021: 99.9148% ( 1) 00:13:22.994 9.630 - 9.691: 99.9209% ( 1) 00:13:22.994 13.531 - 13.592: 99.9269% ( 1) 00:13:22.994 14.629 - 14.690: 99.9330% ( 1) 00:13:22.994 16.701 - 16.823: 99.9391% ( 1) 00:13:22.994 3994.575 - 4025.783: 100.0000% ( 10) 00:13:22.994 00:13:22.994 Complete histogram 00:13:22.994 ================== 00:13:22.994 Range in us Cumulative Count 00:13:22.994 1.714 - 1.722: 0.0183% ( 3) 00:13:22.994 1.722 - 1.730: 0.0244% ( 1) 00:13:22.994 1.730 - 1.737: 0.0913% ( 11) 00:13:22.994 1.737 - 1.745: 0.1096% ( 3) 00:13:22.994 1.745 - 1.752: 0.1218% ( 2) 00:13:22.994 1.752 - 1.760: 0.1400% ( 3) 00:13:22.994 1.760 - 1.768: 0.4687% ( 54) 00:13:22.994 1.768 - 1.775: 4.4317% ( 651) 00:13:22.994 1.775 - 1.783: 12.4734% ( 1321) 00:13:22.994 1.783 - 1.790: 17.5138% ( 828) 00:13:22.994 1.790 - 1.798: 19.4132% ( 312) 00:13:22.994 1.798 - 1.806: 20.8437% ( 235) 00:13:22.994 1.806 - 1.813: 22.8465% ( 329) 00:13:22.994 1.813 - 1.821: 34.9425% ( 1987) 00:13:22.994 1.821 - 1.829: 63.9313% ( 4762) 00:13:22.994 1.829 - 1.836: 85.2864% ( 3508) 00:13:22.994 1.836 - 1.844: 92.4028% ( 1169) 
00:13:22.994 1.844 - 1.851: 95.0569% ( 436) 00:13:22.994 1.851 - 1.859: 96.6884% ( 268) 00:13:22.994 1.859 - 1.867: 97.5224% ( 137) 00:13:22.994 1.867 - 1.874: 97.8572% ( 55) 00:13:22.994 1.874 - 1.882: 98.0946% ( 39) 00:13:22.994 1.882 - 1.890: 98.2955% ( 33) 00:13:22.994 1.890 - 1.897: 98.5633% ( 44) 00:13:22.994 1.897 - 1.905: 98.8312% ( 44) 00:13:22.994 1.905 - 1.912: 99.0199% ( 31) 00:13:22.994 1.912 - 1.920: 99.1660% ( 24) 00:13:22.994 1.920 - 1.928: 99.2451% ( 13) 00:13:22.994 1.928 - 1.935: 99.2878% ( 7) 00:13:22.994 1.935 - 1.943: 99.3060% ( 3) 00:13:22.994 1.943 - 1.950: 99.3182% ( 2) 00:13:22.994 1.950 - 1.966: 99.3243% ( 1) 00:13:22.994 1.966 - 1.981: 99.3365% ( 2) 00:13:22.994 2.027 - 2.042: 99.3425% ( 1) 00:13:22.994 2.133 - 2.149: 99.3486% ( 1) 00:13:22.994 2.149 - 2.164: 99.3547% ( 1) 00:13:22.994 2.240 - 2.255: 99.3608% ( 1) 00:13:22.994 2.377 - 2.392: 99.3669% ( 1) 00:13:22.994 3.779 - 3.794: 99.3730% ( 1) 00:13:22.994 4.084 - 4.114: 99.3791% ( 1) 00:13:22.994 4.206 - 4.236: 99.3852% ( 1) 00:13:22.994 4.389 - 4.419: 99.3912% ( 1) 00:13:22.994 4.450 - 4.480: 99.3973% ( 1) 00:13:22.994 4.602 - 4.632: 99.4034% ( 1) 00:13:22.994 4.937 - 4.968: 99.4095% ( 1) 00:13:22.994 5.029 - 5.059: 99.4156% ( 1) 00:13:22.994 5.303 - 5.333: 99.4339% ( 3) 00:13:22.994 5.333 - 5.364: 99.4399% ( 1) 00:13:22.994 5.394 - 5.425: 99.4460% ( 1) 00:13:22.994 5.486 - 5.516: 99.4521% ( 1) 00:13:22.994 5.547 - 5.577: 99.4582% ( 1) 00:13:22.994 5.882 - 5.912: 99.4643% ( 1) 00:13:22.994 6.095 - 6.126: 99.4826% ( 3) 00:13:22.994 6.430 - 6.461: 99.4886% ( 1) 00:13:22.994 6.491 - 6.522: 99.4947% ( 1) 00:13:22.994 6.522 - 6.552: 99.5008% ( 1) 00:13:22.994 6.583 - 6.613: 99.5069% ( 1) 00:13:22.994 6.674 - 6.705: 99.5191% ( 2) 00:13:22.994 7.101 - 7.131: 99.5252% ( 1) 00:13:22.994 8.046 - 8.107: 99.5313% ( 1) 00:13:22.994 3994.575 - 4025.783: 100.0000% ( 77) 00:13:22.994 00:13:22.994 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:22.994 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:22.994 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:22.994 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:22.994 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.254 [ 00:13:23.254 { 00:13:23.254 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.254 "subtype": "Discovery", 00:13:23.254 "listen_addresses": [], 00:13:23.254 "allow_any_host": true, 00:13:23.254 "hosts": [] 00:13:23.254 }, 00:13:23.254 { 00:13:23.254 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.254 "subtype": "NVMe", 00:13:23.254 "listen_addresses": [ 00:13:23.254 { 00:13:23.254 "trtype": "VFIOUSER", 00:13:23.254 "adrfam": "IPv4", 00:13:23.254 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.254 "trsvcid": "0" 00:13:23.254 } 00:13:23.254 ], 00:13:23.254 "allow_any_host": true, 00:13:23.254 "hosts": [], 00:13:23.254 "serial_number": "SPDK1", 00:13:23.254 "model_number": "SPDK bdev Controller", 00:13:23.254 "max_namespaces": 32, 00:13:23.254 "min_cntlid": 1, 00:13:23.254 "max_cntlid": 65519, 00:13:23.254 "namespaces": [ 00:13:23.254 { 00:13:23.254 "nsid": 1, 00:13:23.254 "bdev_name": "Malloc1", 00:13:23.254 "name": "Malloc1", 00:13:23.254 "nguid": "641265CB68C045379376B8CA2DE0E883", 00:13:23.254 "uuid": "641265cb-68c0-4537-9376-b8ca2de0e883" 00:13:23.254 }, 00:13:23.254 { 00:13:23.254 "nsid": 2, 00:13:23.254 "bdev_name": "Malloc3", 00:13:23.254 "name": "Malloc3", 00:13:23.254 "nguid": "7C8974F965124A33AAA1E01C0A341E46", 00:13:23.254 "uuid": "7c8974f9-6512-4a33-aaa1-e01c0a341e46" 
00:13:23.254 } 00:13:23.254 ] 00:13:23.254 }, 00:13:23.254 { 00:13:23.254 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.254 "subtype": "NVMe", 00:13:23.254 "listen_addresses": [ 00:13:23.254 { 00:13:23.254 "trtype": "VFIOUSER", 00:13:23.254 "adrfam": "IPv4", 00:13:23.254 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.254 "trsvcid": "0" 00:13:23.254 } 00:13:23.254 ], 00:13:23.254 "allow_any_host": true, 00:13:23.254 "hosts": [], 00:13:23.254 "serial_number": "SPDK2", 00:13:23.254 "model_number": "SPDK bdev Controller", 00:13:23.254 "max_namespaces": 32, 00:13:23.254 "min_cntlid": 1, 00:13:23.254 "max_cntlid": 65519, 00:13:23.254 "namespaces": [ 00:13:23.254 { 00:13:23.254 "nsid": 1, 00:13:23.254 "bdev_name": "Malloc2", 00:13:23.254 "name": "Malloc2", 00:13:23.254 "nguid": "73B658BF50FE484994C5F40B4281BCED", 00:13:23.254 "uuid": "73b658bf-50fe-4849-94c5-f40b4281bced" 00:13:23.254 } 00:13:23.254 ] 00:13:23.254 } 00:13:23.254 ] 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2782956 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:23.254 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:23.513 [2024-11-04 16:24:50.176031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.513 Malloc4 00:13:23.513 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:23.772 [2024-11-04 16:24:50.425868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.772 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.772 Asynchronous Event Request test 00:13:23.772 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.772 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.772 Registering asynchronous event callbacks... 00:13:23.772 Starting namespace attribute notice tests for all controllers... 00:13:23.772 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:23.772 aer_cb - Changed Namespace 00:13:23.772 Cleaning up... 
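The `nvmf_get_subsystems` output traced above is plain JSON, and each namespace entry carries both an `nguid` (upper-case, undashed) and a `uuid` (lower-case, dashed) for the same identifier. A minimal sketch of consuming that output, assuming it has been captured to a string; the helper names are illustrative, not part of SPDK:

```python
import json
import uuid

# Trimmed sample in the shape of the rpc.py nvmf_get_subsystems
# output shown in the log above.
subsystems_json = """
[
  {
    "nqn": "nqn.2019-07.io.spdk:cnode1",
    "subtype": "NVMe",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Malloc1",
       "nguid": "641265CB68C045379376B8CA2DE0E883",
       "uuid": "641265cb-68c0-4537-9376-b8ca2de0e883"}
    ]
  }
]
"""

def namespaces_by_nqn(raw):
    """Map subsystem NQN -> {nsid: bdev_name} from nvmf_get_subsystems output."""
    return {
        sub["nqn"]: {ns["nsid"]: ns["bdev_name"] for ns in sub.get("namespaces", [])}
        for sub in json.loads(raw)
    }

def nguid_matches_uuid(ns):
    """The NGUID in the trace is the UUID's hex digits in upper case;
    parsing both through uuid.UUID normalizes the two forms."""
    return uuid.UUID(ns["nguid"]) == uuid.UUID(ns["uuid"])
```

This is how the test above can check that `nvmf_subsystem_add_ns` actually attached the expected bdev under the expected nsid before and after the AER run.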
00:13:24.031 [ 00:13:24.031 { 00:13:24.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.031 "subtype": "Discovery", 00:13:24.031 "listen_addresses": [], 00:13:24.031 "allow_any_host": true, 00:13:24.031 "hosts": [] 00:13:24.031 }, 00:13:24.031 { 00:13:24.031 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.031 "subtype": "NVMe", 00:13:24.031 "listen_addresses": [ 00:13:24.031 { 00:13:24.031 "trtype": "VFIOUSER", 00:13:24.031 "adrfam": "IPv4", 00:13:24.031 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.031 "trsvcid": "0" 00:13:24.031 } 00:13:24.031 ], 00:13:24.031 "allow_any_host": true, 00:13:24.031 "hosts": [], 00:13:24.031 "serial_number": "SPDK1", 00:13:24.031 "model_number": "SPDK bdev Controller", 00:13:24.031 "max_namespaces": 32, 00:13:24.031 "min_cntlid": 1, 00:13:24.031 "max_cntlid": 65519, 00:13:24.031 "namespaces": [ 00:13:24.031 { 00:13:24.031 "nsid": 1, 00:13:24.031 "bdev_name": "Malloc1", 00:13:24.031 "name": "Malloc1", 00:13:24.031 "nguid": "641265CB68C045379376B8CA2DE0E883", 00:13:24.031 "uuid": "641265cb-68c0-4537-9376-b8ca2de0e883" 00:13:24.031 }, 00:13:24.031 { 00:13:24.031 "nsid": 2, 00:13:24.031 "bdev_name": "Malloc3", 00:13:24.031 "name": "Malloc3", 00:13:24.031 "nguid": "7C8974F965124A33AAA1E01C0A341E46", 00:13:24.031 "uuid": "7c8974f9-6512-4a33-aaa1-e01c0a341e46" 00:13:24.031 } 00:13:24.031 ] 00:13:24.031 }, 00:13:24.031 { 00:13:24.031 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.031 "subtype": "NVMe", 00:13:24.031 "listen_addresses": [ 00:13:24.031 { 00:13:24.031 "trtype": "VFIOUSER", 00:13:24.031 "adrfam": "IPv4", 00:13:24.031 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.031 "trsvcid": "0" 00:13:24.031 } 00:13:24.031 ], 00:13:24.031 "allow_any_host": true, 00:13:24.031 "hosts": [], 00:13:24.031 "serial_number": "SPDK2", 00:13:24.031 "model_number": "SPDK bdev Controller", 00:13:24.031 "max_namespaces": 32, 00:13:24.031 "min_cntlid": 1, 00:13:24.031 "max_cntlid": 65519, 00:13:24.031 "namespaces": [ 
00:13:24.031 { 00:13:24.031 "nsid": 1, 00:13:24.031 "bdev_name": "Malloc2", 00:13:24.031 "name": "Malloc2", 00:13:24.031 "nguid": "73B658BF50FE484994C5F40B4281BCED", 00:13:24.031 "uuid": "73b658bf-50fe-4849-94c5-f40b4281bced" 00:13:24.031 }, 00:13:24.031 { 00:13:24.031 "nsid": 2, 00:13:24.031 "bdev_name": "Malloc4", 00:13:24.031 "name": "Malloc4", 00:13:24.031 "nguid": "4FA2D67B90C54137B7C9F3A74FA6C309", 00:13:24.031 "uuid": "4fa2d67b-90c5-4137-b7c9-f3a74fa6c309" 00:13:24.031 } 00:13:24.031 ] 00:13:24.031 } 00:13:24.031 ] 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2782956 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2774895 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2774895 ']' 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2774895 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774895 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774895' 00:13:24.031 killing process with pid 2774895 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2774895 00:13:24.031 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2774895 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2783144 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2783144' 00:13:24.291 Process pid: 2783144 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2783144 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2783144 ']' 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.291 
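The `waitforlisten` helper traced here polls for the RPC UNIX socket with a bounded retry count (`max_retries=100`), and `waitforfile` earlier in the log applies the same pattern to the AER touch file. A minimal sketch of that poll-with-retries pattern, with an illustrative function name rather than SPDK's:

```python
import os
import time

def wait_for_path(path, max_retries=100, interval=0.1):
    """Poll until `path` exists, mirroring autotest_common.sh's
    waitforfile/waitforlisten loop: return the number of polls
    used, or raise TimeoutError once max_retries is exhausted."""
    for i in range(max_retries):
        if os.path.exists(path):
            return i
        time.sleep(interval)
    raise TimeoutError(f"{path} did not appear within {max_retries} retries")
```

In the log this guards the race between launching `nvmf_tgt` in the background and issuing the first `rpc.py` call against `/var/tmp/spdk.sock`.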
16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.291 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:24.291 [2024-11-04 16:24:50.986486] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:24.291 [2024-11-04 16:24:50.987351] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:13:24.291 [2024-11-04 16:24:50.987390] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.291 [2024-11-04 16:24:51.053403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.291 [2024-11-04 16:24:51.095034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.291 [2024-11-04 16:24:51.095074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.291 [2024-11-04 16:24:51.095081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.291 [2024-11-04 16:24:51.095087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.291 [2024-11-04 16:24:51.095092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.291 [2024-11-04 16:24:51.096627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.291 [2024-11-04 16:24:51.096679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.291 [2024-11-04 16:24:51.096697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.291 [2024-11-04 16:24:51.096701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.550 [2024-11-04 16:24:51.164094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:24.550 [2024-11-04 16:24:51.164249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:24.550 [2024-11-04 16:24:51.164311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:24.550 [2024-11-04 16:24:51.164635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:24.550 [2024-11-04 16:24:51.164811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:24.550 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.550 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:24.550 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.490 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.808 Malloc1 00:13:25.808 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:26.100 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:26.358 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:26.617 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.617 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.617 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.617 Malloc2 00:13:26.617 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:26.876 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:27.134 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2783144 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2783144 ']' 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2783144 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:27.393 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.393 16:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783144 00:13:27.393 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.393 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.393 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783144' 00:13:27.393 killing process with pid 2783144 00:13:27.393 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2783144 00:13:27.393 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2783144 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.652 00:13:27.652 real 0m51.626s 00:13:27.652 user 3m19.977s 00:13:27.652 sys 0m3.181s 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 ************************************ 00:13:27.652 END TEST nvmf_vfio_user 00:13:27.652 ************************************ 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.652 ************************************ 00:13:27.652 START TEST nvmf_vfio_user_nvme_compliance 00:13:27.652 ************************************ 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:27.652 * Looking for test storage... 00:13:27.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.652 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.653 16:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.653 16:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.653 --rc genhtml_branch_coverage=1 00:13:27.653 --rc genhtml_function_coverage=1 00:13:27.653 --rc genhtml_legend=1 00:13:27.653 --rc geninfo_all_blocks=1 00:13:27.653 --rc geninfo_unexecuted_blocks=1 00:13:27.653 00:13:27.653 ' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.653 --rc genhtml_branch_coverage=1 00:13:27.653 --rc genhtml_function_coverage=1 00:13:27.653 --rc genhtml_legend=1 00:13:27.653 --rc geninfo_all_blocks=1 00:13:27.653 --rc geninfo_unexecuted_blocks=1 00:13:27.653 00:13:27.653 ' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.653 --rc genhtml_branch_coverage=1 00:13:27.653 --rc genhtml_function_coverage=1 00:13:27.653 --rc 
genhtml_legend=1 00:13:27.653 --rc geninfo_all_blocks=1 00:13:27.653 --rc geninfo_unexecuted_blocks=1 00:13:27.653 00:13:27.653 ' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.653 --rc genhtml_branch_coverage=1 00:13:27.653 --rc genhtml_function_coverage=1 00:13:27.653 --rc genhtml_legend=1 00:13:27.653 --rc geninfo_all_blocks=1 00:13:27.653 --rc geninfo_unexecuted_blocks=1 00:13:27.653 00:13:27.653 ' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.653 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.913 16:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.913 16:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2783906 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2783906' 00:13:27.913 Process pid: 2783906 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2783906 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:27.913 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2783906 ']' 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:27.914 [2024-11-04 16:24:54.541782] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:13:27.914 [2024-11-04 16:24:54.541832] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.914 [2024-11-04 16:24:54.603524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.914 [2024-11-04 16:24:54.644838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.914 [2024-11-04 16:24:54.644873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.914 [2024-11-04 16:24:54.644880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.914 [2024-11-04 16:24:54.644886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.914 [2024-11-04 16:24:54.644891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.914 [2024-11-04 16:24:54.646174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.914 [2024-11-04 16:24:54.646273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.914 [2024-11-04 16:24:54.646275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:27.914 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.290 16:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.290 malloc0 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:29.290 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.291 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.291 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:29.291 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:29.291 00:13:29.291 00:13:29.291 CUnit - A unit testing framework for C - Version 2.1-3 00:13:29.291 http://cunit.sourceforge.net/ 00:13:29.291 00:13:29.291 00:13:29.291 Suite: nvme_compliance 00:13:29.291 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-04 16:24:55.983063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.291 [2024-11-04 16:24:55.984404] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:29.291 [2024-11-04 16:24:55.984419] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:29.291 [2024-11-04 16:24:55.984425] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:29.291 [2024-11-04 16:24:55.986084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.291 passed 00:13:29.291 Test: admin_identify_ctrlr_verify_fused ...[2024-11-04 16:24:56.064608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.291 [2024-11-04 16:24:56.067639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.291 passed 00:13:29.549 Test: admin_identify_ns ...[2024-11-04 16:24:56.147322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.549 [2024-11-04 16:24:56.207621] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:29.549 [2024-11-04 16:24:56.215610] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:29.549 [2024-11-04 16:24:56.236707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:13:29.549 passed 00:13:29.549 Test: admin_get_features_mandatory_features ...[2024-11-04 16:24:56.312022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.549 [2024-11-04 16:24:56.315043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.549 passed 00:13:29.808 Test: admin_get_features_optional_features ...[2024-11-04 16:24:56.394549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.808 [2024-11-04 16:24:56.397570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.808 passed 00:13:29.808 Test: admin_set_features_number_of_queues ...[2024-11-04 16:24:56.473881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.808 [2024-11-04 16:24:56.577692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.808 passed 00:13:30.066 Test: admin_get_log_page_mandatory_logs ...[2024-11-04 16:24:56.653819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.066 [2024-11-04 16:24:56.656837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.066 passed 00:13:30.066 Test: admin_get_log_page_with_lpo ...[2024-11-04 16:24:56.732865] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.066 [2024-11-04 16:24:56.804610] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:30.066 [2024-11-04 16:24:56.817673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.066 passed 00:13:30.325 Test: fabric_property_get ...[2024-11-04 16:24:56.891366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.325 [2024-11-04 16:24:56.892605] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:30.325 [2024-11-04 16:24:56.894388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.325 passed 00:13:30.325 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-04 16:24:56.972898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.325 [2024-11-04 16:24:56.974138] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:30.325 [2024-11-04 16:24:56.975920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.325 passed 00:13:30.325 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-04 16:24:57.051693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.325 [2024-11-04 16:24:57.137611] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.584 [2024-11-04 16:24:57.153614] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.584 [2024-11-04 16:24:57.158686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.584 passed 00:13:30.584 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-04 16:24:57.236331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.584 [2024-11-04 16:24:57.237560] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:30.584 [2024-11-04 16:24:57.239348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.584 passed 00:13:30.584 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-04 16:24:57.313922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.584 [2024-11-04 16:24:57.389609] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:30.843 [2024-11-04 
16:24:57.413604] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.843 [2024-11-04 16:24:57.418685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.843 passed 00:13:30.843 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-04 16:24:57.494314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.843 [2024-11-04 16:24:57.495549] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:30.843 [2024-11-04 16:24:57.495571] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:30.843 [2024-11-04 16:24:57.500349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.843 passed 00:13:30.843 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-04 16:24:57.576043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.843 [2024-11-04 16:24:57.667614] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:31.102 [2024-11-04 16:24:57.675606] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:31.102 [2024-11-04 16:24:57.683606] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:31.102 [2024-11-04 16:24:57.691609] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:31.102 [2024-11-04 16:24:57.720697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.102 passed 00:13:31.102 Test: admin_create_io_sq_verify_pc ...[2024-11-04 16:24:57.796287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.102 [2024-11-04 16:24:57.812616] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:31.102 [2024-11-04 16:24:57.830459] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.102 passed 00:13:31.102 Test: admin_create_io_qp_max_qps ...[2024-11-04 16:24:57.909006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.479 [2024-11-04 16:24:59.004611] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:32.737 [2024-11-04 16:24:59.391978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.737 passed 00:13:32.737 Test: admin_create_io_sq_shared_cq ...[2024-11-04 16:24:59.467902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.995 [2024-11-04 16:24:59.600615] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:32.995 [2024-11-04 16:24:59.637668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.995 passed 00:13:32.995 00:13:32.995 Run Summary: Type Total Ran Passed Failed Inactive 00:13:32.995 suites 1 1 n/a 0 0 00:13:32.995 tests 18 18 18 0 0 00:13:32.995 asserts 360 360 360 0 n/a 00:13:32.995 00:13:32.995 Elapsed time = 1.503 seconds 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2783906 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2783906 ']' 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2783906 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783906 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783906' 00:13:32.995 killing process with pid 2783906 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2783906 00:13:32.995 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2783906 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:33.254 00:13:33.254 real 0m5.610s 00:13:33.254 user 0m15.766s 00:13:33.254 sys 0m0.504s 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.254 ************************************ 00:13:33.254 END TEST nvmf_vfio_user_nvme_compliance 00:13:33.254 ************************************ 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.254 ************************************ 00:13:33.254 START TEST nvmf_vfio_user_fuzz 00:13:33.254 ************************************ 00:13:33.254 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:33.254 * Looking for test storage... 00:13:33.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.254 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.254 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.254 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.514 16:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.514 --rc genhtml_branch_coverage=1 00:13:33.514 --rc genhtml_function_coverage=1 00:13:33.514 --rc genhtml_legend=1 00:13:33.514 --rc geninfo_all_blocks=1 00:13:33.514 --rc geninfo_unexecuted_blocks=1 00:13:33.514 00:13:33.514 ' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.514 --rc genhtml_branch_coverage=1 00:13:33.514 --rc genhtml_function_coverage=1 00:13:33.514 --rc genhtml_legend=1 00:13:33.514 --rc geninfo_all_blocks=1 00:13:33.514 --rc geninfo_unexecuted_blocks=1 00:13:33.514 00:13:33.514 ' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.514 --rc genhtml_branch_coverage=1 00:13:33.514 --rc genhtml_function_coverage=1 00:13:33.514 --rc genhtml_legend=1 00:13:33.514 --rc geninfo_all_blocks=1 00:13:33.514 --rc geninfo_unexecuted_blocks=1 00:13:33.514 00:13:33.514 ' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.514 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:33.514 --rc genhtml_branch_coverage=1 00:13:33.514 --rc genhtml_function_coverage=1 00:13:33.514 --rc genhtml_legend=1 00:13:33.514 --rc geninfo_all_blocks=1 00:13:33.514 --rc geninfo_unexecuted_blocks=1 00:13:33.514 00:13:33.514 ' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.514 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.514 16:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2784894 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2784894' 00:13:33.515 Process pid: 2784894 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2784894 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2784894 ']' 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.515 16:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.515 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:33.774 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.774 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:33.774 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.709 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.709 malloc0 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:34.710 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:06.793 Fuzzing completed. Shutting down the fuzz application 00:14:06.793 00:14:06.793 Dumping successful admin opcodes: 00:14:06.793 8, 9, 10, 24, 00:14:06.793 Dumping successful io opcodes: 00:14:06.793 0, 00:14:06.793 NS: 0x20000081ef00 I/O qp, Total commands completed: 1065221, total successful commands: 4201, random_seed: 2723004800 00:14:06.793 NS: 0x20000081ef00 admin qp, Total commands completed: 263428, total successful commands: 2119, random_seed: 3319905920 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2784894 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2784894 ']' 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2784894 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2784894 00:14:06.793 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2784894' 00:14:06.793 killing process with pid 2784894 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2784894 00:14:06.793 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2784894 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:06.793 00:14:06.793 real 0m32.191s 00:14:06.793 user 0m30.172s 00:14:06.793 sys 0m31.434s 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:06.793 ************************************ 00:14:06.793 END TEST nvmf_vfio_user_fuzz 00:14:06.793 ************************************ 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.793 ************************************ 00:14:06.793 START TEST nvmf_auth_target 00:14:06.793 ************************************ 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:06.793 * Looking for test storage... 00:14:06.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.793 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.793 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.793 --rc genhtml_branch_coverage=1 00:14:06.793 --rc genhtml_function_coverage=1 00:14:06.793 --rc genhtml_legend=1 00:14:06.793 --rc geninfo_all_blocks=1 00:14:06.793 --rc geninfo_unexecuted_blocks=1 00:14:06.793 00:14:06.793 ' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.793 --rc genhtml_branch_coverage=1 00:14:06.793 --rc genhtml_function_coverage=1 00:14:06.793 --rc genhtml_legend=1 00:14:06.793 --rc geninfo_all_blocks=1 00:14:06.793 --rc geninfo_unexecuted_blocks=1 00:14:06.793 00:14:06.793 ' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.793 --rc genhtml_branch_coverage=1 00:14:06.793 --rc genhtml_function_coverage=1 00:14:06.793 --rc genhtml_legend=1 00:14:06.793 --rc geninfo_all_blocks=1 00:14:06.793 --rc geninfo_unexecuted_blocks=1 00:14:06.793 00:14:06.793 ' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.793 --rc genhtml_branch_coverage=1 00:14:06.793 --rc genhtml_function_coverage=1 00:14:06.793 --rc genhtml_legend=1 00:14:06.793 
--rc geninfo_all_blocks=1 00:14:06.793 --rc geninfo_unexecuted_blocks=1 00:14:06.793 00:14:06.793 ' 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.793 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.794 
16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:06.794 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.794 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.794 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.987 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.987 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:10.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:10.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.987 
16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.987 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:10.988 Found net devices under 0000:86:00.0: cvl_0_0 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.988 
16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:10.988 Found net devices under 0000:86:00.1: cvl_0_1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.988 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.988 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:14:11.247 00:14:11.247 --- 10.0.0.2 ping statistics --- 00:14:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.247 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:14:11.247 00:14:11.247 --- 10.0.0.1 ping statistics --- 00:14:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.247 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
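The trace above (nvmf/common.sh lines @250-@291) builds the test topology: the target NIC is moved into a private network namespace, the initiator NIC stays in the root namespace, one iptables ACCEPT rule opens the NVMe/TCP port, and a ping in each direction verifies connectivity. A dry-run sketch of that sequence, with the device and namespace names taken from the log (`run` only echoes, since the real commands need root and the `cvl_0_*` E810 interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing performed in the trace.
# Nothing here is executed for real: `run` echoes each command so the
# sequence can be inspected without root or the test hardware.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0          # moves into the namespace (NVMe-oF target side)
INITIATOR_IF=cvl_0_1       # stays in the root namespace (host side)
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# NVMe/TCP listens on 4420; the rule is tagged with a comment so the
# framework can remove exactly this rule during cleanup.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Note the asymmetry the log shows: only the target side is namespaced, which is enough to force real TCP traffic between the two physical ports instead of loopback short-circuiting.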
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.247 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2793204 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2793204 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2793204 ']' 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.248 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2793231 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3790852f167e15a48c8ea227112e5d5de25fbcee43be2740 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EtN 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3790852f167e15a48c8ea227112e5d5de25fbcee43be2740 0 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3790852f167e15a48c8ea227112e5d5de25fbcee43be2740 0 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3790852f167e15a48c8ea227112e5d5de25fbcee43be2740 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EtN 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EtN 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.EtN 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=15e7b82c976395c08859514702044c24248a66d065da3153e0426fddbba16cae 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.v4S 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 15e7b82c976395c08859514702044c24248a66d065da3153e0426fddbba16cae 3 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 15e7b82c976395c08859514702044c24248a66d065da3153e0426fddbba16cae 3 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=15e7b82c976395c08859514702044c24248a66d065da3153e0426fddbba16cae 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:11.507 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.v4S 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.v4S 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.v4S 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c591827d75cd6aed08b992467e8b16ad 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zde 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c591827d75cd6aed08b992467e8b16ad 1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
c591827d75cd6aed08b992467e8b16ad 1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c591827d75cd6aed08b992467e8b16ad 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zde 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zde 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Zde 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ec0016fd6f857505e8f634ce0c4fe13112ba054a75941bd3 00:14:11.767 16:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.S2T 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ec0016fd6f857505e8f634ce0c4fe13112ba054a75941bd3 2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ec0016fd6f857505e8f634ce0c4fe13112ba054a75941bd3 2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ec0016fd6f857505e8f634ce0c4fe13112ba054a75941bd3 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.S2T 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.S2T 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.S2T 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=75f21c134dfb8e714593ede8cb15971aeab36828960c7a5b 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jeF 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 75f21c134dfb8e714593ede8cb15971aeab36828960c7a5b 2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 75f21c134dfb8e714593ede8cb15971aeab36828960c7a5b 2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=75f21c134dfb8e714593ede8cb15971aeab36828960c7a5b 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jeF 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jeF 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.jeF 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:11.767 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=057dc2ad6d236a2cabf3022d38ce211c 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XWm 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 057dc2ad6d236a2cabf3022d38ce211c 1 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 057dc2ad6d236a2cabf3022d38ce211c 1 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=057dc2ad6d236a2cabf3022d38ce211c 00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:11.768 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XWm 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XWm 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XWm 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b2038b554e99600d8371d9d2b7637629703735fe0a3997aa648365745548ab7 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vzb 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b2038b554e99600d8371d9d2b7637629703735fe0a3997aa648365745548ab7 3 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2b2038b554e99600d8371d9d2b7637629703735fe0a3997aa648365745548ab7 3 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b2038b554e99600d8371d9d2b7637629703735fe0a3997aa648365745548ab7 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vzb 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vzb 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Vzb 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2793204 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2793204 ']' 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
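The key-generation block above repeats one recipe eight times: read N random bytes with `xxd -p -c0 -l N /dev/urandom`, then pipe the hex string through an inline `python -` heredoc (`format_key DHHC-1 <key> <digest>`) to produce a `DHHC-1:...` secret. A sketch of what that inline Python plausibly computes, assuming (hedged; the heredoc body is not shown in the trace) the DH-HMAC-CHAP representation of base64 over the secret followed by its CRC-32 in little-endian order:

```python
import base64
import zlib


def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the inline `python -` step in the trace: wrap a generated
    hex string as an NVMe DH-HMAC-CHAP secret. Assumes the representation
    is prefix:digest:base64(secret || CRC-32 LE):, which matches the
    observed inputs (prefix DHHC-1, digest codes 0-3) but is inferred."""
    raw = key.encode("ascii")                       # the hex string itself is the secret
    crc = zlib.crc32(raw).to_bytes(4, "little")     # 4-byte integrity trailer
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "{}:{:02x}:{}:".format(prefix, digest, b64)


# Fixed hex string standing in for `xxd -p -c0 -l 24 /dev/urandom`
# output (the trace's keys[0] value), digest code 0 = null:
print(format_dhchap_key("3790852f167e15a48c8ea227112e5d5de25fbcee43be2740", 0))
```

The digest code in the second field (`0`-`3` per the `digests` map in the trace) records which hash the key was generated for, and the trailing CRC lets the receiver reject a corrupted secret before attempting authentication.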
00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.027 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2793231 /var/tmp/host.sock 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2793231 ']' 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:12.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.286 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EtN 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.EtN 00:14:12.286 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.EtN 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.v4S ]] 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v4S 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v4S 00:14:12.545 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v4S 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zde 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Zde 00:14:12.804 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Zde 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.S2T ]] 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.S2T 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.S2T 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.S2T 00:14:13.062 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jeF 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.jeF 00:14:13.063 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.jeF 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XWm ]] 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XWm 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XWm 00:14:13.321 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XWm 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vzb 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Vzb 00:14:13.579 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Vzb 00:14:13.838 16:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.838 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.838 16:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.097 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.097 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.097 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.097 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.097 00:14:14.356 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.356 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.356 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.356 { 00:14:14.356 "cntlid": 1, 00:14:14.356 "qid": 0, 00:14:14.356 "state": "enabled", 00:14:14.356 "thread": "nvmf_tgt_poll_group_000", 00:14:14.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:14.356 "listen_address": { 00:14:14.356 "trtype": "TCP", 00:14:14.356 "adrfam": "IPv4", 00:14:14.356 "traddr": "10.0.0.2", 00:14:14.356 "trsvcid": "4420" 00:14:14.356 }, 00:14:14.356 "peer_address": { 00:14:14.356 "trtype": "TCP", 00:14:14.356 "adrfam": "IPv4", 00:14:14.356 "traddr": "10.0.0.1", 00:14:14.356 "trsvcid": "50100" 00:14:14.356 }, 00:14:14.356 "auth": { 00:14:14.356 "state": "completed", 00:14:14.356 "digest": "sha256", 00:14:14.356 "dhgroup": "null" 00:14:14.356 } 00:14:14.356 } 00:14:14.356 ]' 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.356 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.615 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.874 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:14.874 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.443 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.701 00:14:15.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.960 { 00:14:15.960 "cntlid": 3, 00:14:15.960 "qid": 0, 00:14:15.960 "state": "enabled", 00:14:15.960 "thread": "nvmf_tgt_poll_group_000", 00:14:15.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:15.960 "listen_address": { 00:14:15.960 "trtype": "TCP", 00:14:15.960 "adrfam": "IPv4", 00:14:15.960 
"traddr": "10.0.0.2", 00:14:15.960 "trsvcid": "4420" 00:14:15.960 }, 00:14:15.960 "peer_address": { 00:14:15.960 "trtype": "TCP", 00:14:15.960 "adrfam": "IPv4", 00:14:15.960 "traddr": "10.0.0.1", 00:14:15.960 "trsvcid": "50126" 00:14:15.960 }, 00:14:15.960 "auth": { 00:14:15.960 "state": "completed", 00:14:15.960 "digest": "sha256", 00:14:15.960 "dhgroup": "null" 00:14:15.960 } 00:14:15.960 } 00:14:15.960 ]' 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.960 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.219 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:16.219 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.219 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.219 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.219 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.219 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:16.219 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.785 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.044 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.302 00:14:17.302 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.302 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.302 
16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.561 { 00:14:17.561 "cntlid": 5, 00:14:17.561 "qid": 0, 00:14:17.561 "state": "enabled", 00:14:17.561 "thread": "nvmf_tgt_poll_group_000", 00:14:17.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:17.561 "listen_address": { 00:14:17.561 "trtype": "TCP", 00:14:17.561 "adrfam": "IPv4", 00:14:17.561 "traddr": "10.0.0.2", 00:14:17.561 "trsvcid": "4420" 00:14:17.561 }, 00:14:17.561 "peer_address": { 00:14:17.561 "trtype": "TCP", 00:14:17.561 "adrfam": "IPv4", 00:14:17.561 "traddr": "10.0.0.1", 00:14:17.561 "trsvcid": "37772" 00:14:17.561 }, 00:14:17.561 "auth": { 00:14:17.561 "state": "completed", 00:14:17.561 "digest": "sha256", 00:14:17.561 "dhgroup": "null" 00:14:17.561 } 00:14:17.561 } 00:14:17.561 ]' 00:14:17.561 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.562 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.820 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:17.820 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.386 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.644 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.902 00:14:18.903 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.903 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.903 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.160 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.160 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.160 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.160 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.160 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.160 
16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.160 { 00:14:19.160 "cntlid": 7, 00:14:19.160 "qid": 0, 00:14:19.160 "state": "enabled", 00:14:19.160 "thread": "nvmf_tgt_poll_group_000", 00:14:19.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:19.161 "listen_address": { 00:14:19.161 "trtype": "TCP", 00:14:19.161 "adrfam": "IPv4", 00:14:19.161 "traddr": "10.0.0.2", 00:14:19.161 "trsvcid": "4420" 00:14:19.161 }, 00:14:19.161 "peer_address": { 00:14:19.161 "trtype": "TCP", 00:14:19.161 "adrfam": "IPv4", 00:14:19.161 "traddr": "10.0.0.1", 00:14:19.161 "trsvcid": "37806" 00:14:19.161 }, 00:14:19.161 "auth": { 00:14:19.161 "state": "completed", 00:14:19.161 "digest": "sha256", 00:14:19.161 "dhgroup": "null" 00:14:19.161 } 00:14:19.161 } 00:14:19.161 ]' 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.161 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.419 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:19.419 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.985 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.244 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.503 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.503 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.762 { 00:14:20.762 "cntlid": 9, 00:14:20.762 "qid": 0, 00:14:20.762 "state": "enabled", 00:14:20.762 "thread": "nvmf_tgt_poll_group_000", 00:14:20.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:20.762 "listen_address": { 00:14:20.762 "trtype": "TCP", 00:14:20.762 "adrfam": "IPv4", 00:14:20.762 "traddr": "10.0.0.2", 00:14:20.762 "trsvcid": "4420" 00:14:20.762 }, 00:14:20.762 "peer_address": { 00:14:20.762 "trtype": "TCP", 00:14:20.762 "adrfam": "IPv4", 00:14:20.762 "traddr": "10.0.0.1", 00:14:20.762 "trsvcid": "37830" 00:14:20.762 
}, 00:14:20.762 "auth": { 00:14:20.762 "state": "completed", 00:14:20.762 "digest": "sha256", 00:14:20.762 "dhgroup": "ffdhe2048" 00:14:20.762 } 00:14:20.762 } 00:14:20.762 ]' 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.762 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.021 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:21.021 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.588 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.846 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.104 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.104 16:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.104 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.104 { 00:14:22.104 "cntlid": 11, 00:14:22.104 "qid": 0, 00:14:22.104 "state": "enabled", 00:14:22.104 "thread": "nvmf_tgt_poll_group_000", 00:14:22.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:22.104 "listen_address": { 00:14:22.104 "trtype": "TCP", 00:14:22.104 "adrfam": "IPv4", 00:14:22.104 "traddr": "10.0.0.2", 00:14:22.104 "trsvcid": "4420" 00:14:22.104 }, 00:14:22.104 "peer_address": { 00:14:22.104 "trtype": "TCP", 00:14:22.104 "adrfam": "IPv4", 00:14:22.104 "traddr": "10.0.0.1", 00:14:22.104 "trsvcid": "37842" 00:14:22.104 }, 00:14:22.104 "auth": { 00:14:22.104 "state": "completed", 00:14:22.104 "digest": "sha256", 00:14:22.104 "dhgroup": "ffdhe2048" 00:14:22.104 } 00:14:22.104 } 00:14:22.104 ]' 00:14:22.363 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.363 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.363 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.363 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.363 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.363 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.363 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.363 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.621 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:22.622 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.353 16:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.353 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.353 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:23.353 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.353 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.354 16:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.354 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.612 00:14:23.612 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.612 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.612 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.871 { 00:14:23.871 "cntlid": 13, 00:14:23.871 "qid": 0, 00:14:23.871 "state": "enabled", 00:14:23.871 "thread": "nvmf_tgt_poll_group_000", 00:14:23.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:23.871 "listen_address": { 00:14:23.871 "trtype": "TCP", 00:14:23.871 "adrfam": "IPv4", 00:14:23.871 "traddr": "10.0.0.2", 00:14:23.871 "trsvcid": "4420" 00:14:23.871 }, 00:14:23.871 "peer_address": { 00:14:23.871 "trtype": "TCP", 00:14:23.871 "adrfam": "IPv4", 00:14:23.871 "traddr": "10.0.0.1", 00:14:23.871 "trsvcid": "37874" 00:14:23.871 }, 00:14:23.871 "auth": { 00:14:23.871 "state": "completed", 00:14:23.871 "digest": "sha256", 00:14:23.871 "dhgroup": "ffdhe2048" 00:14:23.871 } 00:14:23.871 } 00:14:23.871 ]' 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.871 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:14:24.130 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:24.130 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.698 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.956 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.957 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:24.957 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.957 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.215 00:14:25.215 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.215 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.215 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.215 { 00:14:25.215 "cntlid": 15, 00:14:25.215 "qid": 0, 00:14:25.215 "state": "enabled", 00:14:25.215 "thread": "nvmf_tgt_poll_group_000", 00:14:25.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:25.215 "listen_address": { 00:14:25.215 "trtype": "TCP", 00:14:25.215 "adrfam": "IPv4", 00:14:25.215 "traddr": "10.0.0.2", 00:14:25.215 "trsvcid": "4420" 00:14:25.215 }, 00:14:25.215 "peer_address": { 00:14:25.215 "trtype": "TCP", 00:14:25.215 "adrfam": "IPv4", 00:14:25.215 "traddr": "10.0.0.1", 00:14:25.215 "trsvcid": "37902" 00:14:25.215 }, 00:14:25.215 "auth": { 00:14:25.215 
"state": "completed", 00:14:25.215 "digest": "sha256", 00:14:25.215 "dhgroup": "ffdhe2048" 00:14:25.215 } 00:14:25.215 } 00:14:25.215 ]' 00:14:25.215 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.474 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.733 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:25.733 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.299 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:26.299 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.299 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.558 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.558 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.558 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.558 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.558 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.817 
16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.817 { 00:14:26.817 "cntlid": 17, 00:14:26.817 "qid": 0, 00:14:26.817 "state": "enabled", 00:14:26.817 "thread": "nvmf_tgt_poll_group_000", 00:14:26.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:26.817 "listen_address": { 00:14:26.817 "trtype": "TCP", 00:14:26.817 "adrfam": "IPv4", 00:14:26.817 "traddr": "10.0.0.2", 00:14:26.817 "trsvcid": "4420" 00:14:26.817 }, 00:14:26.817 "peer_address": { 00:14:26.817 "trtype": "TCP", 00:14:26.817 "adrfam": "IPv4", 00:14:26.817 "traddr": "10.0.0.1", 00:14:26.817 "trsvcid": "37938" 00:14:26.817 }, 00:14:26.817 "auth": { 00:14:26.817 "state": "completed", 00:14:26.817 "digest": "sha256", 00:14:26.817 "dhgroup": "ffdhe3072" 00:14:26.817 } 00:14:26.817 } 00:14:26.817 ]' 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.817 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.076 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.076 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.076 16:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.076 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.076 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.076 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.335 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:27.335 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.902 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:27.902 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.903 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.903 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.161 00:14:28.161 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.162 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.162 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.421 { 00:14:28.421 "cntlid": 19, 00:14:28.421 "qid": 0, 00:14:28.421 "state": "enabled", 00:14:28.421 "thread": "nvmf_tgt_poll_group_000", 00:14:28.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:28.421 "listen_address": { 00:14:28.421 "trtype": "TCP", 00:14:28.421 "adrfam": "IPv4", 00:14:28.421 "traddr": "10.0.0.2", 00:14:28.421 "trsvcid": "4420" 00:14:28.421 }, 00:14:28.421 "peer_address": { 00:14:28.421 "trtype": "TCP", 00:14:28.421 "adrfam": "IPv4", 00:14:28.421 "traddr": "10.0.0.1", 00:14:28.421 "trsvcid": "37864" 00:14:28.421 }, 00:14:28.421 "auth": { 00:14:28.421 "state": "completed", 00:14:28.421 "digest": "sha256", 00:14:28.421 "dhgroup": "ffdhe3072" 00:14:28.421 } 00:14:28.421 } 00:14:28.421 ]' 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.421 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.680 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.680 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.680 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:14:28.680 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:28.680 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.248 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.507 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.766 00:14:29.766 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.766 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.766 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.025 { 00:14:30.025 "cntlid": 21, 00:14:30.025 "qid": 0, 00:14:30.025 "state": "enabled", 00:14:30.025 "thread": "nvmf_tgt_poll_group_000", 00:14:30.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:30.025 "listen_address": { 00:14:30.025 "trtype": "TCP", 00:14:30.025 "adrfam": "IPv4", 00:14:30.025 "traddr": "10.0.0.2", 00:14:30.025 "trsvcid": "4420" 00:14:30.025 }, 00:14:30.025 "peer_address": { 00:14:30.025 "trtype": "TCP", 00:14:30.025 "adrfam": "IPv4", 
00:14:30.025 "traddr": "10.0.0.1", 00:14:30.025 "trsvcid": "37886" 00:14:30.025 }, 00:14:30.025 "auth": { 00:14:30.025 "state": "completed", 00:14:30.025 "digest": "sha256", 00:14:30.025 "dhgroup": "ffdhe3072" 00:14:30.025 } 00:14:30.025 } 00:14:30.025 ]' 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.025 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.284 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:30.284 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.851 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.110 16:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.110 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.369 00:14:31.369 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.369 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.369 16:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.628 { 00:14:31.628 "cntlid": 23, 00:14:31.628 "qid": 0, 00:14:31.628 "state": "enabled", 00:14:31.628 "thread": "nvmf_tgt_poll_group_000", 00:14:31.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:31.628 "listen_address": { 00:14:31.628 "trtype": "TCP", 00:14:31.628 "adrfam": "IPv4", 00:14:31.628 "traddr": "10.0.0.2", 00:14:31.628 "trsvcid": "4420" 00:14:31.628 }, 00:14:31.628 "peer_address": { 00:14:31.628 "trtype": "TCP", 00:14:31.628 "adrfam": "IPv4", 00:14:31.628 "traddr": "10.0.0.1", 00:14:31.628 "trsvcid": "37912" 00:14:31.628 }, 00:14:31.628 "auth": { 00:14:31.628 "state": "completed", 00:14:31.628 "digest": "sha256", 00:14:31.628 "dhgroup": "ffdhe3072" 00:14:31.628 } 00:14:31.628 } 00:14:31.628 ]' 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.887 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:31.887 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.456 16:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.456 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.715 
16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.715 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.978 00:14:32.978 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.978 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.978 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.240 { 00:14:33.240 "cntlid": 25, 00:14:33.240 "qid": 0, 00:14:33.240 "state": "enabled", 00:14:33.240 "thread": "nvmf_tgt_poll_group_000", 00:14:33.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:33.240 "listen_address": { 00:14:33.240 "trtype": "TCP", 00:14:33.240 "adrfam": "IPv4", 00:14:33.240 "traddr": "10.0.0.2", 00:14:33.240 "trsvcid": "4420" 00:14:33.240 }, 00:14:33.240 "peer_address": { 00:14:33.240 "trtype": "TCP", 00:14:33.240 "adrfam": "IPv4", 00:14:33.240 "traddr": "10.0.0.1", 00:14:33.240 "trsvcid": "37940" 00:14:33.240 }, 00:14:33.240 "auth": { 00:14:33.240 "state": "completed", 00:14:33.240 "digest": "sha256", 00:14:33.240 "dhgroup": "ffdhe4096" 00:14:33.240 } 00:14:33.240 } 00:14:33.240 ]' 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.240 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:14:33.499 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:33.499 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:34.067 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.326 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.326 16:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.585 00:14:34.585 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.585 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.585 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.844 { 00:14:34.844 "cntlid": 27, 00:14:34.844 "qid": 0, 00:14:34.844 "state": "enabled", 00:14:34.844 "thread": "nvmf_tgt_poll_group_000", 00:14:34.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:34.844 "listen_address": { 00:14:34.844 "trtype": "TCP", 00:14:34.844 "adrfam": "IPv4", 00:14:34.844 "traddr": "10.0.0.2", 00:14:34.844 "trsvcid": "4420" 00:14:34.844 }, 00:14:34.844 "peer_address": { 
00:14:34.844 "trtype": "TCP", 00:14:34.844 "adrfam": "IPv4", 00:14:34.844 "traddr": "10.0.0.1", 00:14:34.844 "trsvcid": "37972" 00:14:34.844 }, 00:14:34.844 "auth": { 00:14:34.844 "state": "completed", 00:14:34.844 "digest": "sha256", 00:14:34.844 "dhgroup": "ffdhe4096" 00:14:34.844 } 00:14:34.844 } 00:14:34.844 ]' 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.844 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.103 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:35.103 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:35.792 16:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.792 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.050 00:14:36.050 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.050 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.050 16:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.308 { 00:14:36.308 "cntlid": 29, 00:14:36.308 "qid": 0, 00:14:36.308 "state": "enabled", 00:14:36.308 "thread": "nvmf_tgt_poll_group_000", 00:14:36.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:36.308 "listen_address": { 00:14:36.308 "trtype": "TCP", 00:14:36.308 "adrfam": "IPv4", 00:14:36.308 "traddr": "10.0.0.2", 00:14:36.308 "trsvcid": "4420" 00:14:36.308 }, 00:14:36.308 "peer_address": { 00:14:36.308 "trtype": "TCP", 00:14:36.308 "adrfam": "IPv4", 00:14:36.308 "traddr": "10.0.0.1", 00:14:36.308 "trsvcid": "38000" 00:14:36.308 }, 00:14:36.308 "auth": { 00:14:36.308 "state": "completed", 00:14:36.308 "digest": "sha256", 00:14:36.308 "dhgroup": "ffdhe4096" 00:14:36.308 } 00:14:36.308 } 00:14:36.308 ]' 00:14:36.308 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # 
jq -r '.[0].auth.dhgroup' 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.308 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.567 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:36.567 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.133 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.392 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.651 00:14:37.651 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.651 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.651 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.909 { 00:14:37.909 "cntlid": 31, 00:14:37.909 "qid": 0, 00:14:37.909 "state": "enabled", 00:14:37.909 "thread": "nvmf_tgt_poll_group_000", 00:14:37.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:37.909 "listen_address": { 00:14:37.909 "trtype": "TCP", 00:14:37.909 "adrfam": "IPv4", 00:14:37.909 "traddr": "10.0.0.2", 00:14:37.909 "trsvcid": "4420" 00:14:37.909 }, 00:14:37.909 "peer_address": { 00:14:37.909 "trtype": "TCP", 00:14:37.909 "adrfam": "IPv4", 00:14:37.909 "traddr": "10.0.0.1", 00:14:37.909 "trsvcid": "35386" 00:14:37.909 }, 00:14:37.909 "auth": { 00:14:37.909 "state": "completed", 00:14:37.909 "digest": "sha256", 00:14:37.909 "dhgroup": "ffdhe4096" 00:14:37.909 } 00:14:37.909 } 00:14:37.909 ]' 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.909 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.910 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.910 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.910 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.910 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:14:38.168 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:38.168 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.735 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.994 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:38.994 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.994 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.995 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.253 00:14:39.254 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.254 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.254 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.512 { 00:14:39.512 "cntlid": 33, 00:14:39.512 "qid": 0, 00:14:39.512 "state": "enabled", 00:14:39.512 "thread": "nvmf_tgt_poll_group_000", 00:14:39.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:39.512 "listen_address": { 00:14:39.512 "trtype": "TCP", 00:14:39.512 "adrfam": "IPv4", 00:14:39.512 "traddr": "10.0.0.2", 00:14:39.512 "trsvcid": "4420" 00:14:39.512 }, 00:14:39.512 "peer_address": { 00:14:39.512 "trtype": "TCP", 00:14:39.512 "adrfam": "IPv4", 
00:14:39.512 "traddr": "10.0.0.1", 00:14:39.512 "trsvcid": "35414" 00:14:39.512 }, 00:14:39.512 "auth": { 00:14:39.512 "state": "completed", 00:14:39.512 "digest": "sha256", 00:14:39.512 "dhgroup": "ffdhe6144" 00:14:39.512 } 00:14:39.512 } 00:14:39.512 ]' 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.512 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.770 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:39.770 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:40.337 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.337 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:40.337 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.337 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.337 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.337 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.337 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.337 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.595 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.854 00:14:40.854 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.854 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.854 
16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.113 { 00:14:41.113 "cntlid": 35, 00:14:41.113 "qid": 0, 00:14:41.113 "state": "enabled", 00:14:41.113 "thread": "nvmf_tgt_poll_group_000", 00:14:41.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:41.113 "listen_address": { 00:14:41.113 "trtype": "TCP", 00:14:41.113 "adrfam": "IPv4", 00:14:41.113 "traddr": "10.0.0.2", 00:14:41.113 "trsvcid": "4420" 00:14:41.113 }, 00:14:41.113 "peer_address": { 00:14:41.113 "trtype": "TCP", 00:14:41.113 "adrfam": "IPv4", 00:14:41.113 "traddr": "10.0.0.1", 00:14:41.113 "trsvcid": "35428" 00:14:41.113 }, 00:14:41.113 "auth": { 00:14:41.113 "state": "completed", 00:14:41.113 "digest": "sha256", 00:14:41.113 "dhgroup": "ffdhe6144" 00:14:41.113 } 00:14:41.113 } 00:14:41.113 ]' 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.113 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.372 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:41.372 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:41.938 16:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.938 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.197 16:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.197 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.456 00:14:42.456 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.456 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.456 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.715 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.715 { 00:14:42.715 "cntlid": 37, 00:14:42.715 "qid": 0, 00:14:42.715 "state": "enabled", 00:14:42.715 "thread": "nvmf_tgt_poll_group_000", 00:14:42.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:42.715 "listen_address": { 00:14:42.715 "trtype": "TCP", 00:14:42.715 "adrfam": "IPv4", 00:14:42.715 "traddr": "10.0.0.2", 00:14:42.715 "trsvcid": "4420" 00:14:42.715 }, 00:14:42.715 "peer_address": { 00:14:42.715 "trtype": "TCP", 00:14:42.715 "adrfam": "IPv4", 00:14:42.715 "traddr": "10.0.0.1", 00:14:42.715 "trsvcid": "35460" 00:14:42.715 }, 00:14:42.715 "auth": { 00:14:42.715 "state": "completed", 00:14:42.715 "digest": "sha256", 00:14:42.715 "dhgroup": "ffdhe6144" 00:14:42.715 } 00:14:42.715 } 00:14:42.715 ]' 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.716 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.974 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:42.974 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.541 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.541 16:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.799 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.799 16:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.057 00:14:44.057 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.057 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.057 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.316 { 00:14:44.316 "cntlid": 39, 00:14:44.316 "qid": 0, 00:14:44.316 "state": "enabled", 00:14:44.316 "thread": "nvmf_tgt_poll_group_000", 00:14:44.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:44.316 "listen_address": { 00:14:44.316 "trtype": "TCP", 00:14:44.316 "adrfam": "IPv4", 00:14:44.316 "traddr": "10.0.0.2", 00:14:44.316 "trsvcid": "4420" 00:14:44.316 }, 00:14:44.316 "peer_address": { 00:14:44.316 "trtype": 
"TCP", 00:14:44.316 "adrfam": "IPv4", 00:14:44.316 "traddr": "10.0.0.1", 00:14:44.316 "trsvcid": "35492" 00:14:44.316 }, 00:14:44.316 "auth": { 00:14:44.316 "state": "completed", 00:14:44.316 "digest": "sha256", 00:14:44.316 "dhgroup": "ffdhe6144" 00:14:44.316 } 00:14:44.316 } 00:14:44.316 ]' 00:14:44.316 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.316 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.582 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:44.582 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 
00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.149 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.407 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.974 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.974 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.974 { 00:14:45.975 "cntlid": 41, 00:14:45.975 "qid": 0, 00:14:45.975 "state": "enabled", 00:14:45.975 "thread": "nvmf_tgt_poll_group_000", 00:14:45.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:45.975 "listen_address": { 00:14:45.975 "trtype": "TCP", 00:14:45.975 "adrfam": "IPv4", 00:14:45.975 "traddr": "10.0.0.2", 00:14:45.975 "trsvcid": "4420" 00:14:45.975 }, 00:14:45.975 "peer_address": { 00:14:45.975 "trtype": "TCP", 00:14:45.975 "adrfam": "IPv4", 00:14:45.975 "traddr": "10.0.0.1", 00:14:45.975 "trsvcid": "35520" 00:14:45.975 }, 00:14:45.975 "auth": { 00:14:45.975 "state": "completed", 00:14:45.975 "digest": "sha256", 00:14:45.975 "dhgroup": "ffdhe8192" 00:14:45.975 } 00:14:45.975 } 00:14:45.975 ]' 00:14:45.975 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.975 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.975 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.975 16:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.975 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.233 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.233 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.233 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.233 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:46.233 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:46.799 16:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.799 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.058 16:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.058 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.625 00:14:47.625 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.625 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.625 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.884 { 00:14:47.884 "cntlid": 43, 00:14:47.884 "qid": 0, 00:14:47.884 "state": "enabled", 00:14:47.884 "thread": "nvmf_tgt_poll_group_000", 00:14:47.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:47.884 "listen_address": { 00:14:47.884 "trtype": "TCP", 00:14:47.884 "adrfam": "IPv4", 00:14:47.884 "traddr": "10.0.0.2", 00:14:47.884 "trsvcid": "4420" 00:14:47.884 }, 00:14:47.884 "peer_address": { 00:14:47.884 "trtype": "TCP", 00:14:47.884 "adrfam": "IPv4", 00:14:47.884 "traddr": "10.0.0.1", 00:14:47.884 "trsvcid": "35612" 00:14:47.884 }, 00:14:47.884 "auth": { 00:14:47.884 "state": "completed", 00:14:47.884 "digest": "sha256", 00:14:47.884 "dhgroup": "ffdhe8192" 00:14:47.884 } 00:14:47.884 } 00:14:47.884 ]' 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.884 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.885 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.885 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.885 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.885 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.143 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:48.143 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.711 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.711 16:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.969 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.228 00:14:49.228 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.228 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.228 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.486 { 00:14:49.486 "cntlid": 45, 00:14:49.486 "qid": 0, 00:14:49.486 "state": "enabled", 00:14:49.486 "thread": "nvmf_tgt_poll_group_000", 00:14:49.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:49.486 "listen_address": { 00:14:49.486 "trtype": "TCP", 00:14:49.486 "adrfam": "IPv4", 00:14:49.486 "traddr": "10.0.0.2", 00:14:49.486 
"trsvcid": "4420" 00:14:49.486 }, 00:14:49.486 "peer_address": { 00:14:49.486 "trtype": "TCP", 00:14:49.486 "adrfam": "IPv4", 00:14:49.486 "traddr": "10.0.0.1", 00:14:49.486 "trsvcid": "35642" 00:14:49.486 }, 00:14:49.486 "auth": { 00:14:49.486 "state": "completed", 00:14:49.486 "digest": "sha256", 00:14:49.486 "dhgroup": "ffdhe8192" 00:14:49.486 } 00:14:49.486 } 00:14:49.486 ]' 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.486 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.744 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.744 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.744 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.744 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:49.745 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.311 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.569 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:50.569 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.569 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.569 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.570 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.138 00:14:51.138 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.138 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.138 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.400 { 00:14:51.400 "cntlid": 47, 00:14:51.400 "qid": 0, 00:14:51.400 "state": "enabled", 00:14:51.400 "thread": "nvmf_tgt_poll_group_000", 00:14:51.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:51.400 "listen_address": { 00:14:51.400 "trtype": "TCP", 00:14:51.400 "adrfam": "IPv4", 00:14:51.400 "traddr": "10.0.0.2", 00:14:51.400 "trsvcid": "4420" 00:14:51.400 }, 00:14:51.400 "peer_address": { 00:14:51.400 "trtype": "TCP", 00:14:51.400 "adrfam": "IPv4", 00:14:51.400 "traddr": "10.0.0.1", 00:14:51.400 "trsvcid": "35670" 00:14:51.400 }, 00:14:51.400 "auth": { 00:14:51.400 "state": "completed", 00:14:51.400 "digest": "sha256", 00:14:51.400 "dhgroup": "ffdhe8192" 00:14:51.400 } 00:14:51.400 } 00:14:51.400 ]' 00:14:51.400 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.400 16:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.400 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.658 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:51.659 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.225 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.483 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.742 00:14:52.742 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.742 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.742 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.742 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.743 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.743 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.743 16:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.016 { 00:14:53.016 "cntlid": 49, 00:14:53.016 "qid": 0, 00:14:53.016 "state": "enabled", 00:14:53.016 "thread": "nvmf_tgt_poll_group_000", 00:14:53.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:53.016 "listen_address": { 00:14:53.016 "trtype": "TCP", 00:14:53.016 "adrfam": "IPv4", 00:14:53.016 "traddr": "10.0.0.2", 00:14:53.016 "trsvcid": "4420" 00:14:53.016 }, 00:14:53.016 "peer_address": { 00:14:53.016 "trtype": "TCP", 00:14:53.016 "adrfam": "IPv4", 00:14:53.016 "traddr": "10.0.0.1", 00:14:53.016 "trsvcid": "35686" 00:14:53.016 }, 00:14:53.016 "auth": { 00:14:53.016 "state": "completed", 00:14:53.016 "digest": "sha384", 00:14:53.016 "dhgroup": "null" 00:14:53.016 } 00:14:53.016 } 00:14:53.016 ]' 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.016 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.280 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:53.280 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.847 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.106 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.106 00:14:54.365 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.365 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.365 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.365 { 00:14:54.365 "cntlid": 51, 00:14:54.365 "qid": 0, 00:14:54.365 "state": "enabled", 00:14:54.365 "thread": "nvmf_tgt_poll_group_000", 00:14:54.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:54.365 "listen_address": { 
00:14:54.365 "trtype": "TCP", 00:14:54.365 "adrfam": "IPv4", 00:14:54.365 "traddr": "10.0.0.2", 00:14:54.365 "trsvcid": "4420" 00:14:54.365 }, 00:14:54.365 "peer_address": { 00:14:54.365 "trtype": "TCP", 00:14:54.365 "adrfam": "IPv4", 00:14:54.365 "traddr": "10.0.0.1", 00:14:54.365 "trsvcid": "35720" 00:14:54.365 }, 00:14:54.365 "auth": { 00:14:54.365 "state": "completed", 00:14:54.365 "digest": "sha384", 00:14:54.365 "dhgroup": "null" 00:14:54.365 } 00:14:54.365 } 00:14:54.365 ]' 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.365 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.624 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:54.624 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.624 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.624 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.624 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.883 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:54.883 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:14:55.450 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.450 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.451 
16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.451 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.709 00:14:55.709 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.709 16:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.709 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.968 { 00:14:55.968 "cntlid": 53, 00:14:55.968 "qid": 0, 00:14:55.968 "state": "enabled", 00:14:55.968 "thread": "nvmf_tgt_poll_group_000", 00:14:55.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:55.968 "listen_address": { 00:14:55.968 "trtype": "TCP", 00:14:55.968 "adrfam": "IPv4", 00:14:55.968 "traddr": "10.0.0.2", 00:14:55.968 "trsvcid": "4420" 00:14:55.968 }, 00:14:55.968 "peer_address": { 00:14:55.968 "trtype": "TCP", 00:14:55.968 "adrfam": "IPv4", 00:14:55.968 "traddr": "10.0.0.1", 00:14:55.968 "trsvcid": "35752" 00:14:55.968 }, 00:14:55.968 "auth": { 00:14:55.968 "state": "completed", 00:14:55.968 "digest": "sha384", 00:14:55.968 "dhgroup": "null" 00:14:55.968 } 00:14:55.968 } 00:14:55.968 ]' 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.968 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.227 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:56.227 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.794 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:57.052 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.053 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.312 00:14:57.312 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.312 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.312 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.571 { 00:14:57.571 "cntlid": 55, 00:14:57.571 "qid": 0, 00:14:57.571 "state": "enabled", 00:14:57.571 "thread": "nvmf_tgt_poll_group_000", 00:14:57.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:57.571 "listen_address": { 00:14:57.571 "trtype": "TCP", 00:14:57.571 "adrfam": "IPv4", 00:14:57.571 "traddr": "10.0.0.2", 00:14:57.571 "trsvcid": "4420" 00:14:57.571 }, 00:14:57.571 "peer_address": { 00:14:57.571 "trtype": "TCP", 00:14:57.571 "adrfam": "IPv4", 00:14:57.571 "traddr": "10.0.0.1", 00:14:57.571 "trsvcid": "39368" 00:14:57.571 }, 00:14:57.571 "auth": { 00:14:57.571 "state": "completed", 00:14:57.571 "digest": "sha384", 00:14:57.571 "dhgroup": "null" 00:14:57.571 } 00:14:57.571 } 00:14:57.571 ]' 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.571 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.830 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:57.830 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.398 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.656 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.656 16:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.915 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.915 { 00:14:58.915 "cntlid": 57, 00:14:58.915 "qid": 0, 00:14:58.915 "state": "enabled", 00:14:58.915 "thread": "nvmf_tgt_poll_group_000", 00:14:58.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:14:58.915 "listen_address": { 00:14:58.915 "trtype": "TCP", 00:14:58.915 "adrfam": "IPv4", 00:14:58.915 "traddr": "10.0.0.2", 00:14:58.915 "trsvcid": "4420" 00:14:58.915 }, 00:14:58.915 "peer_address": { 
00:14:58.915 "trtype": "TCP", 00:14:58.915 "adrfam": "IPv4", 00:14:58.915 "traddr": "10.0.0.1", 00:14:58.915 "trsvcid": "39388" 00:14:58.915 }, 00:14:58.915 "auth": { 00:14:58.915 "state": "completed", 00:14:58.915 "digest": "sha384", 00:14:58.915 "dhgroup": "ffdhe2048" 00:14:58.915 } 00:14:58.915 } 00:14:58.915 ]' 00:14:58.915 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.175 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.434 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:14:59.434 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:00.002 16:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.002 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.261 00:15:00.261 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.261 16:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.261 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.520 { 00:15:00.520 "cntlid": 59, 00:15:00.520 "qid": 0, 00:15:00.520 "state": "enabled", 00:15:00.520 "thread": "nvmf_tgt_poll_group_000", 00:15:00.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:00.520 "listen_address": { 00:15:00.520 "trtype": "TCP", 00:15:00.520 "adrfam": "IPv4", 00:15:00.520 "traddr": "10.0.0.2", 00:15:00.520 "trsvcid": "4420" 00:15:00.520 }, 00:15:00.520 "peer_address": { 00:15:00.520 "trtype": "TCP", 00:15:00.520 "adrfam": "IPv4", 00:15:00.520 "traddr": "10.0.0.1", 00:15:00.520 "trsvcid": "39400" 00:15:00.520 }, 00:15:00.520 "auth": { 00:15:00.520 "state": "completed", 00:15:00.520 "digest": "sha384", 00:15:00.520 "dhgroup": "ffdhe2048" 00:15:00.520 } 00:15:00.520 } 00:15:00.520 ]' 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.520 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.778 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:00.779 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.346 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.605 16:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.605 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.863 00:15:01.863 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.863 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.863 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.122 { 00:15:02.122 "cntlid": 61, 00:15:02.122 "qid": 0, 00:15:02.122 "state": "enabled", 00:15:02.122 "thread": "nvmf_tgt_poll_group_000", 00:15:02.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:02.122 "listen_address": { 00:15:02.122 "trtype": "TCP", 00:15:02.122 "adrfam": "IPv4", 00:15:02.122 "traddr": "10.0.0.2", 00:15:02.122 "trsvcid": "4420" 00:15:02.122 }, 00:15:02.122 "peer_address": { 00:15:02.122 "trtype": "TCP", 00:15:02.122 "adrfam": "IPv4", 00:15:02.122 "traddr": "10.0.0.1", 00:15:02.122 "trsvcid": "39432" 00:15:02.122 }, 00:15:02.122 "auth": { 00:15:02.122 "state": "completed", 00:15:02.122 "digest": "sha384", 00:15:02.122 "dhgroup": "ffdhe2048" 00:15:02.122 } 00:15:02.122 } 00:15:02.122 ]' 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:02.122 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.380 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:02.380 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.952 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.214 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.473 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.473 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.474 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.474 { 00:15:03.474 "cntlid": 63, 00:15:03.474 "qid": 0, 00:15:03.474 "state": "enabled", 00:15:03.474 "thread": "nvmf_tgt_poll_group_000", 00:15:03.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:03.474 "listen_address": { 00:15:03.474 "trtype": "TCP", 00:15:03.474 "adrfam": "IPv4", 00:15:03.474 "traddr": "10.0.0.2", 00:15:03.474 "trsvcid": 
"4420" 00:15:03.474 }, 00:15:03.474 "peer_address": { 00:15:03.474 "trtype": "TCP", 00:15:03.474 "adrfam": "IPv4", 00:15:03.474 "traddr": "10.0.0.1", 00:15:03.474 "trsvcid": "39456" 00:15:03.474 }, 00:15:03.474 "auth": { 00:15:03.474 "state": "completed", 00:15:03.474 "digest": "sha384", 00:15:03.474 "dhgroup": "ffdhe2048" 00:15:03.474 } 00:15:03.474 } 00:15:03.474 ]' 00:15:03.474 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.732 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.991 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:03.991 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.557 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.816 00:15:04.816 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.816 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:04.816 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.075 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.076 { 00:15:05.076 "cntlid": 65, 00:15:05.076 "qid": 0, 00:15:05.076 "state": "enabled", 00:15:05.076 "thread": "nvmf_tgt_poll_group_000", 00:15:05.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:05.076 "listen_address": { 00:15:05.076 "trtype": "TCP", 00:15:05.076 "adrfam": "IPv4", 00:15:05.076 "traddr": "10.0.0.2", 00:15:05.076 "trsvcid": "4420" 00:15:05.076 }, 00:15:05.076 "peer_address": { 00:15:05.076 "trtype": "TCP", 00:15:05.076 "adrfam": "IPv4", 00:15:05.076 "traddr": "10.0.0.1", 00:15:05.076 "trsvcid": "39498" 00:15:05.076 }, 00:15:05.076 "auth": { 00:15:05.076 "state": "completed", 00:15:05.076 "digest": "sha384", 00:15:05.076 "dhgroup": "ffdhe3072" 00:15:05.076 } 00:15:05.076 } 00:15:05.076 ]' 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.076 16:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.076 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.335 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.335 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.335 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.335 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:05.335 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:05.903 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.162 16:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.162 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.421 00:15:06.421 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.421 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.421 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.680 { 00:15:06.680 "cntlid": 67, 00:15:06.680 "qid": 0, 00:15:06.680 "state": "enabled", 00:15:06.680 "thread": "nvmf_tgt_poll_group_000", 00:15:06.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:06.680 "listen_address": { 00:15:06.680 "trtype": "TCP", 00:15:06.680 "adrfam": "IPv4", 00:15:06.680 "traddr": "10.0.0.2", 00:15:06.680 "trsvcid": "4420" 00:15:06.680 }, 00:15:06.680 "peer_address": { 00:15:06.680 "trtype": "TCP", 00:15:06.680 "adrfam": "IPv4", 00:15:06.680 "traddr": "10.0.0.1", 00:15:06.680 "trsvcid": "39532" 00:15:06.680 }, 00:15:06.680 "auth": { 00:15:06.680 "state": "completed", 00:15:06.680 "digest": "sha384", 00:15:06.680 "dhgroup": "ffdhe3072" 00:15:06.680 } 00:15:06.680 } 00:15:06.680 ]' 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:06.680 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.939 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:06.939 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.506 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.765 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.023 00:15:08.023 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.023 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.023 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.281 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.281 { 00:15:08.281 "cntlid": 69, 00:15:08.281 "qid": 0, 00:15:08.281 "state": "enabled", 00:15:08.281 "thread": "nvmf_tgt_poll_group_000", 00:15:08.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:08.281 "listen_address": { 
00:15:08.281 "trtype": "TCP", 00:15:08.281 "adrfam": "IPv4", 00:15:08.281 "traddr": "10.0.0.2", 00:15:08.281 "trsvcid": "4420" 00:15:08.281 }, 00:15:08.281 "peer_address": { 00:15:08.281 "trtype": "TCP", 00:15:08.281 "adrfam": "IPv4", 00:15:08.281 "traddr": "10.0.0.1", 00:15:08.281 "trsvcid": "49146" 00:15:08.281 }, 00:15:08.282 "auth": { 00:15:08.282 "state": "completed", 00:15:08.282 "digest": "sha384", 00:15:08.282 "dhgroup": "ffdhe3072" 00:15:08.282 } 00:15:08.282 } 00:15:08.282 ]' 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.282 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.540 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:08.540 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:09.106 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.106 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:09.106 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.106 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.107 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.107 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.107 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.107 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.365 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.366 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.366 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.366 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.366 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.625 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.625 { 00:15:09.625 "cntlid": 71, 00:15:09.625 "qid": 0, 00:15:09.625 "state": "enabled", 00:15:09.625 "thread": "nvmf_tgt_poll_group_000", 00:15:09.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:09.625 "listen_address": { 00:15:09.625 "trtype": "TCP", 00:15:09.625 "adrfam": "IPv4", 00:15:09.625 "traddr": "10.0.0.2", 00:15:09.625 "trsvcid": "4420" 00:15:09.625 }, 00:15:09.625 "peer_address": { 00:15:09.625 "trtype": "TCP", 00:15:09.625 "adrfam": "IPv4", 00:15:09.625 "traddr": "10.0.0.1", 00:15:09.625 "trsvcid": "49172" 00:15:09.625 }, 00:15:09.625 "auth": { 00:15:09.625 "state": "completed", 00:15:09.625 "digest": "sha384", 00:15:09.625 "dhgroup": "ffdhe3072" 00:15:09.625 } 00:15:09.625 } 00:15:09.625 ]' 00:15:09.625 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.883 16:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.883 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.141 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:10.141 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.706 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.707 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.965 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.224 16:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.224 { 00:15:11.224 "cntlid": 73, 00:15:11.224 "qid": 0, 00:15:11.224 "state": "enabled", 00:15:11.224 "thread": "nvmf_tgt_poll_group_000", 00:15:11.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:11.224 "listen_address": { 00:15:11.224 "trtype": "TCP", 00:15:11.224 "adrfam": "IPv4", 00:15:11.224 "traddr": "10.0.0.2", 00:15:11.224 "trsvcid": "4420" 00:15:11.224 }, 00:15:11.224 "peer_address": { 00:15:11.224 "trtype": "TCP", 00:15:11.224 "adrfam": "IPv4", 00:15:11.224 "traddr": "10.0.0.1", 00:15:11.224 "trsvcid": "49194" 00:15:11.224 }, 00:15:11.224 "auth": { 00:15:11.224 "state": "completed", 00:15:11.224 "digest": "sha384", 00:15:11.224 "dhgroup": "ffdhe4096" 00:15:11.224 } 00:15:11.224 } 00:15:11.224 ]' 00:15:11.224 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.224 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.224 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.483 16:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:11.483 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.049 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.308 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.566 00:15:12.566 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.566 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.566 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.824 { 00:15:12.824 "cntlid": 75, 00:15:12.824 "qid": 0, 00:15:12.824 "state": "enabled", 00:15:12.824 "thread": "nvmf_tgt_poll_group_000", 00:15:12.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:12.824 
"listen_address": { 00:15:12.824 "trtype": "TCP", 00:15:12.824 "adrfam": "IPv4", 00:15:12.824 "traddr": "10.0.0.2", 00:15:12.824 "trsvcid": "4420" 00:15:12.824 }, 00:15:12.824 "peer_address": { 00:15:12.824 "trtype": "TCP", 00:15:12.824 "adrfam": "IPv4", 00:15:12.824 "traddr": "10.0.0.1", 00:15:12.824 "trsvcid": "49226" 00:15:12.824 }, 00:15:12.824 "auth": { 00:15:12.824 "state": "completed", 00:15:12.824 "digest": "sha384", 00:15:12.824 "dhgroup": "ffdhe4096" 00:15:12.824 } 00:15:12.824 } 00:15:12.824 ]' 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.824 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.082 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.082 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.082 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.082 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:13.082 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.649 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.908 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.167 00:15:14.167 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:15:14.167 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.167 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.425 { 00:15:14.425 "cntlid": 77, 00:15:14.425 "qid": 0, 00:15:14.425 "state": "enabled", 00:15:14.425 "thread": "nvmf_tgt_poll_group_000", 00:15:14.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:14.425 "listen_address": { 00:15:14.425 "trtype": "TCP", 00:15:14.425 "adrfam": "IPv4", 00:15:14.425 "traddr": "10.0.0.2", 00:15:14.425 "trsvcid": "4420" 00:15:14.425 }, 00:15:14.425 "peer_address": { 00:15:14.425 "trtype": "TCP", 00:15:14.425 "adrfam": "IPv4", 00:15:14.425 "traddr": "10.0.0.1", 00:15:14.425 "trsvcid": "49244" 00:15:14.425 }, 00:15:14.425 "auth": { 00:15:14.425 "state": "completed", 00:15:14.425 "digest": "sha384", 00:15:14.425 "dhgroup": "ffdhe4096" 00:15:14.425 } 00:15:14.425 } 00:15:14.425 ]' 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.425 16:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.425 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.684 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.684 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.684 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.684 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:14.684 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:15.251 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:15.252 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:15.511 16:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.511 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.769 00:15:15.769 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.769 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.769 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.028 16:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.028 { 00:15:16.028 "cntlid": 79, 00:15:16.028 "qid": 0, 00:15:16.028 "state": "enabled", 00:15:16.028 "thread": "nvmf_tgt_poll_group_000", 00:15:16.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:16.028 "listen_address": { 00:15:16.028 "trtype": "TCP", 00:15:16.028 "adrfam": "IPv4", 00:15:16.028 "traddr": "10.0.0.2", 00:15:16.028 "trsvcid": "4420" 00:15:16.028 }, 00:15:16.028 "peer_address": { 00:15:16.028 "trtype": "TCP", 00:15:16.028 "adrfam": "IPv4", 00:15:16.028 "traddr": "10.0.0.1", 00:15:16.028 "trsvcid": "49278" 00:15:16.028 }, 00:15:16.028 "auth": { 00:15:16.028 "state": "completed", 00:15:16.028 "digest": "sha384", 00:15:16.028 "dhgroup": "ffdhe4096" 00:15:16.028 } 00:15:16.028 } 00:15:16.028 ]' 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.028 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.029 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:16.029 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.287 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.287 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.287 16:26:42 
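Each iteration of the `for keyid` loop in the log repeats the same pattern: constrain the host to one digest/dhgroup combination, attach a controller with the matching DH-HMAC-CHAP key pair, verify it, and tear it down. A condensed sketch of that cycle follows; `hostrpc` here is a stub that echoes the command instead of invoking `scripts/rpc.py`, since this sketch assumes no live SPDK target is available.

```shell
#!/bin/sh
# Stub for the real hostrpc helper, which wraps:
#   scripts/rpc.py -s /var/tmp/host.sock "$@"
hostrpc() { echo "rpc.py $*"; }

digest=sha384
dhgroup=ffdhe4096
subnqn=nqn.2024-03.io.spdk:cnode0

for keyid in 0 1 2 3; do
    # 1. Restrict the host-side NVMe driver to one digest/dhgroup pair
    hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2. Attach a controller authenticating with this key (and ctrlr key,
    #    when one is defined for the key index)
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 3. Confirm the controller came up, then detach it
    hostrpc bdev_nvme_get_controllers
    hostrpc bdev_nvme_detach_controller nvme0
done
```

In the actual test the cycle also adds and removes the host on the target side (`nvmf_subsystem_add_host` / `nvmf_subsystem_remove_host`) and exercises the kernel path with `nvme connect` / `nvme disconnect`; those steps are elided here for brevity.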
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.288 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:16.288 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:16.853 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.119 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.378 00:15:17.378 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.378 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.378 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.636 { 00:15:17.636 "cntlid": 81, 00:15:17.636 "qid": 0, 00:15:17.636 "state": "enabled", 00:15:17.636 "thread": "nvmf_tgt_poll_group_000", 00:15:17.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:17.636 "listen_address": { 
00:15:17.636 "trtype": "TCP", 00:15:17.636 "adrfam": "IPv4", 00:15:17.636 "traddr": "10.0.0.2", 00:15:17.636 "trsvcid": "4420" 00:15:17.636 }, 00:15:17.636 "peer_address": { 00:15:17.636 "trtype": "TCP", 00:15:17.636 "adrfam": "IPv4", 00:15:17.636 "traddr": "10.0.0.1", 00:15:17.636 "trsvcid": "50168" 00:15:17.636 }, 00:15:17.636 "auth": { 00:15:17.636 "state": "completed", 00:15:17.636 "digest": "sha384", 00:15:17.636 "dhgroup": "ffdhe6144" 00:15:17.636 } 00:15:17.636 } 00:15:17.636 ]' 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.636 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.895 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.895 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.895 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.895 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.895 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.154 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:18.154 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.721 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.288 00:15:19.288 16:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.288 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.288 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.288 { 00:15:19.288 "cntlid": 83, 00:15:19.288 "qid": 0, 00:15:19.288 "state": "enabled", 00:15:19.288 "thread": "nvmf_tgt_poll_group_000", 00:15:19.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:19.288 "listen_address": { 00:15:19.288 "trtype": "TCP", 00:15:19.288 "adrfam": "IPv4", 00:15:19.288 "traddr": "10.0.0.2", 00:15:19.288 "trsvcid": "4420" 00:15:19.288 }, 00:15:19.288 "peer_address": { 00:15:19.288 "trtype": "TCP", 00:15:19.288 "adrfam": "IPv4", 00:15:19.288 "traddr": "10.0.0.1", 00:15:19.288 "trsvcid": "50184" 00:15:19.288 }, 00:15:19.288 "auth": { 00:15:19.288 "state": "completed", 00:15:19.288 "digest": "sha384", 00:15:19.288 "dhgroup": "ffdhe6144" 00:15:19.288 } 00:15:19.288 } 00:15:19.288 ]' 00:15:19.288 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.547 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.805 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:19.805 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.372 16:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:20.372 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.372 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.373 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.373 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.373 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.373 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.373 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.939 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.939 { 00:15:20.939 "cntlid": 85, 00:15:20.939 "qid": 0, 00:15:20.939 "state": "enabled", 00:15:20.939 "thread": "nvmf_tgt_poll_group_000", 00:15:20.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:20.939 "listen_address": { 00:15:20.939 "trtype": "TCP", 00:15:20.939 "adrfam": "IPv4", 00:15:20.939 "traddr": "10.0.0.2", 00:15:20.939 "trsvcid": "4420" 00:15:20.939 }, 00:15:20.939 "peer_address": { 00:15:20.939 "trtype": "TCP", 00:15:20.939 "adrfam": "IPv4", 00:15:20.939 "traddr": "10.0.0.1", 00:15:20.939 "trsvcid": "50202" 00:15:20.939 }, 00:15:20.939 "auth": { 00:15:20.939 "state": "completed", 00:15:20.939 "digest": "sha384", 00:15:20.939 "dhgroup": "ffdhe6144" 00:15:20.939 } 00:15:20.939 } 00:15:20.939 ]' 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.939 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.198 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.198 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.198 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:21.198 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.198 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.456 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:21.456 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
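Throughout this trace, xtrace prints comparisons such as `[[ sha384 == \s\h\a\3\8\4 ]]`. The backslashes are bash's own rendering: inside `[[ ]]` an unquoted right-hand side is a glob pattern, so when the script quotes the expected value to force a literal match, `set -x` re-prints it with every character escaped. A minimal standalone sketch of the idiom (variable names here are illustrative, not from `auth.sh`):

```shell
#!/usr/bin/env bash
digest=sha384
# Inside [[ ]], an unquoted RHS is treated as a glob pattern; escaping
# every character (which is how xtrace re-prints a quoted RHS) forces a
# literal, character-by-character comparison instead of pattern matching.
if [[ $digest == \s\h\a\3\8\4 ]]; then
  echo "digest verified"
fi
# An unescaped RHS, by contrast, pattern-matches:
[[ $digest == sha* ]] && echo "prefix match"
```

This is why the log shows `[[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]` and `[[ completed == \c\o\m\p\l\e\t\e\d ]]` rather than plain quoted strings.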
00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.022 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.588 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.588 { 00:15:22.588 "cntlid": 87, 00:15:22.588 "qid": 0, 00:15:22.588 "state": "enabled", 00:15:22.588 "thread": "nvmf_tgt_poll_group_000", 00:15:22.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:22.588 "listen_address": { 00:15:22.588 "trtype": 
"TCP", 00:15:22.588 "adrfam": "IPv4", 00:15:22.588 "traddr": "10.0.0.2", 00:15:22.588 "trsvcid": "4420" 00:15:22.588 }, 00:15:22.588 "peer_address": { 00:15:22.588 "trtype": "TCP", 00:15:22.588 "adrfam": "IPv4", 00:15:22.588 "traddr": "10.0.0.1", 00:15:22.588 "trsvcid": "50226" 00:15:22.588 }, 00:15:22.588 "auth": { 00:15:22.588 "state": "completed", 00:15:22.588 "digest": "sha384", 00:15:22.588 "dhgroup": "ffdhe6144" 00:15:22.588 } 00:15:22.588 } 00:15:22.588 ]' 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.588 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:22.846 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.413 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.671 16:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.671 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.238 00:15:24.238 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.238 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.238 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.497 { 00:15:24.497 "cntlid": 89, 00:15:24.497 "qid": 0, 00:15:24.497 "state": "enabled", 00:15:24.497 "thread": "nvmf_tgt_poll_group_000", 00:15:24.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:24.497 "listen_address": { 00:15:24.497 "trtype": "TCP", 00:15:24.497 "adrfam": "IPv4", 00:15:24.497 "traddr": "10.0.0.2", 00:15:24.497 "trsvcid": "4420" 00:15:24.497 }, 00:15:24.498 "peer_address": { 00:15:24.498 "trtype": "TCP", 00:15:24.498 "adrfam": "IPv4", 00:15:24.498 "traddr": "10.0.0.1", 00:15:24.498 "trsvcid": "50260" 00:15:24.498 }, 00:15:24.498 "auth": { 00:15:24.498 "state": "completed", 00:15:24.498 "digest": "sha384", 00:15:24.498 "dhgroup": "ffdhe8192" 00:15:24.498 } 00:15:24.498 } 00:15:24.498 ]' 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.498 16:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.498 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.756 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:24.756 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
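The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line that precedes each `nvmf_subsystem_add_host` call in this trace builds the controller-key flag conditionally: the `:+` expansion yields the flag pair only when a controller key exists for that key index, and the array assignment keeps the flag and its argument as separate words. A standalone sketch with hypothetical key material (the array contents below are made up, not taken from this run):

```shell
#!/usr/bin/env bash
# Hypothetical controller-key table: index 1 deliberately left empty.
ckeys=("DHHC-1:00:aaaa:" "" "DHHC-1:00:cccc:")
for keyid in "${!ckeys[@]}"; do
  # ${ckeys[$keyid]:+...} expands to the flag pair only when the entry is
  # non-empty, so a key index without a controller key produces no
  # --dhchap-ctrlr-key argument at all (unidirectional authentication).
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "key$keyid -> ${ckey[*]:-<unidirectional auth>}"
done
```

This matches the pattern visible in the log, where key3 is added with `--dhchap-key key3` but no `--dhchap-ctrlr-key`, while key0/key1/key2 get both flags.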
00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.324 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.583 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.843 00:15:25.843 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.843 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.843 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.102 { 00:15:26.102 "cntlid": 91, 00:15:26.102 "qid": 0, 00:15:26.102 "state": "enabled", 00:15:26.102 "thread": "nvmf_tgt_poll_group_000", 00:15:26.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:26.102 "listen_address": { 00:15:26.102 "trtype": "TCP", 00:15:26.102 "adrfam": "IPv4", 00:15:26.102 "traddr": "10.0.0.2", 00:15:26.102 "trsvcid": "4420" 00:15:26.102 }, 00:15:26.102 "peer_address": { 00:15:26.102 "trtype": "TCP", 00:15:26.102 "adrfam": "IPv4", 00:15:26.102 "traddr": "10.0.0.1", 00:15:26.102 "trsvcid": "50286" 00:15:26.102 }, 00:15:26.102 "auth": { 00:15:26.102 "state": "completed", 00:15:26.102 "digest": "sha384", 00:15:26.102 "dhgroup": "ffdhe8192" 00:15:26.102 } 00:15:26.102 } 00:15:26.102 ]' 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.102 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.361 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.361 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.361 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:26.361 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.361 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.361 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:26.361 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
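The values passed via `--dhchap-secret` and `--dhchap-ctrl-secret` use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<t>:<base64>:`, where `<t>` identifies the transform applied to the configured secret (`00` meaning none, with `01`/`02`/`03` for SHA-256/384/512) and, per the NVMe base specification, the base64 payload is the secret followed by a CRC-32 of it. A sketch that splits one of the secrets appearing in this trace into its fields (the field names are mine; the CRC-32 detail is an assumption from the spec, not something this log verifies):

```shell
#!/usr/bin/env bash
# One of the --dhchap-secret values from this run, split into its fields.
secret='DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==:'
IFS=: read -r prefix transform b64 _ <<<"$secret"
# Transform 00 means the secret is used as-is, with no SHA retransform.
echo "prefix=$prefix transform=$transform"
# Payload = secret || CRC-32, so a 48-byte secret decodes to 52 bytes.
n=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "decoded bytes: $((n))"
```

The longer `DHHC-1:03:` controller secrets in the trace follow the same layout with a larger secret size.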
00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:26.929 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:27.187 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.188 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.755 00:15:27.755 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.755 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.755 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.014 { 00:15:28.014 "cntlid": 93, 00:15:28.014 "qid": 0, 00:15:28.014 "state": "enabled", 00:15:28.014 "thread": "nvmf_tgt_poll_group_000", 00:15:28.014 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:28.014 "listen_address": { 00:15:28.014 "trtype": "TCP", 00:15:28.014 "adrfam": "IPv4", 00:15:28.014 "traddr": "10.0.0.2", 00:15:28.014 "trsvcid": "4420" 00:15:28.014 }, 00:15:28.014 "peer_address": { 00:15:28.014 "trtype": "TCP", 00:15:28.014 "adrfam": "IPv4", 00:15:28.014 "traddr": "10.0.0.1", 00:15:28.014 "trsvcid": "59484" 00:15:28.014 }, 00:15:28.014 "auth": { 00:15:28.014 "state": "completed", 00:15:28.014 "digest": "sha384", 00:15:28.014 "dhgroup": "ffdhe8192" 00:15:28.014 } 00:15:28.014 } 00:15:28.014 ]' 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.014 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.273 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:28.273 16:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.841 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.101 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.669 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.669 { 00:15:29.669 "cntlid": 95, 00:15:29.669 "qid": 0, 00:15:29.669 "state": "enabled", 00:15:29.669 "thread": "nvmf_tgt_poll_group_000", 00:15:29.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:29.669 "listen_address": { 00:15:29.669 "trtype": "TCP", 00:15:29.669 "adrfam": "IPv4", 00:15:29.669 "traddr": "10.0.0.2", 00:15:29.669 "trsvcid": "4420" 00:15:29.669 }, 00:15:29.669 "peer_address": { 00:15:29.669 "trtype": "TCP", 00:15:29.669 "adrfam": "IPv4", 00:15:29.669 "traddr": "10.0.0.1", 00:15:29.669 "trsvcid": "59514" 00:15:29.669 }, 00:15:29.669 "auth": { 00:15:29.669 "state": "completed", 00:15:29.669 "digest": "sha384", 00:15:29.669 "dhgroup": "ffdhe8192" 00:15:29.669 } 00:15:29.669 } 00:15:29.669 ]' 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.669 16:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.669 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.928 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:29.929 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:30.495 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.495 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:30.495 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.495 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.496 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.755 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.017 00:15:31.017 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.017 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.017 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.276 16:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.276 { 00:15:31.276 "cntlid": 97, 00:15:31.276 "qid": 0, 00:15:31.276 "state": "enabled", 00:15:31.276 "thread": "nvmf_tgt_poll_group_000", 00:15:31.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:31.276 "listen_address": { 00:15:31.276 "trtype": "TCP", 00:15:31.276 "adrfam": "IPv4", 00:15:31.276 "traddr": "10.0.0.2", 00:15:31.276 "trsvcid": "4420" 00:15:31.276 }, 00:15:31.276 "peer_address": { 00:15:31.276 "trtype": "TCP", 00:15:31.276 "adrfam": "IPv4", 00:15:31.276 "traddr": "10.0.0.1", 00:15:31.276 "trsvcid": "59534" 00:15:31.276 }, 00:15:31.276 "auth": { 00:15:31.276 "state": "completed", 00:15:31.276 "digest": "sha512", 00:15:31.276 "dhgroup": "null" 00:15:31.276 } 00:15:31.276 } 00:15:31.276 ]' 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.276 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.276 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.535 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:31.535 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:32.103 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.362 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.622 00:15:32.622 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.622 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.622 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.881 { 00:15:32.881 "cntlid": 99, 
00:15:32.881 "qid": 0, 00:15:32.881 "state": "enabled", 00:15:32.881 "thread": "nvmf_tgt_poll_group_000", 00:15:32.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:32.881 "listen_address": { 00:15:32.881 "trtype": "TCP", 00:15:32.881 "adrfam": "IPv4", 00:15:32.881 "traddr": "10.0.0.2", 00:15:32.881 "trsvcid": "4420" 00:15:32.881 }, 00:15:32.881 "peer_address": { 00:15:32.881 "trtype": "TCP", 00:15:32.881 "adrfam": "IPv4", 00:15:32.881 "traddr": "10.0.0.1", 00:15:32.881 "trsvcid": "59550" 00:15:32.881 }, 00:15:32.881 "auth": { 00:15:32.881 "state": "completed", 00:15:32.881 "digest": "sha512", 00:15:32.881 "dhgroup": "null" 00:15:32.881 } 00:15:32.881 } 00:15:32.881 ]' 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.881 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.140 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret 
DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:33.140 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.708 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.967 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.967 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.226 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.226 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.226 { 00:15:34.226 "cntlid": 101, 00:15:34.226 "qid": 0, 00:15:34.226 "state": "enabled", 00:15:34.226 "thread": "nvmf_tgt_poll_group_000", 00:15:34.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:34.226 "listen_address": { 00:15:34.226 "trtype": "TCP", 00:15:34.226 "adrfam": "IPv4", 00:15:34.226 "traddr": "10.0.0.2", 00:15:34.226 "trsvcid": "4420" 00:15:34.226 }, 00:15:34.226 "peer_address": { 00:15:34.226 "trtype": "TCP", 00:15:34.226 "adrfam": "IPv4", 00:15:34.226 "traddr": "10.0.0.1", 00:15:34.226 "trsvcid": "59596" 00:15:34.226 }, 00:15:34.226 "auth": { 00:15:34.226 "state": "completed", 00:15:34.226 "digest": "sha512", 00:15:34.226 "dhgroup": "null" 00:15:34.226 } 00:15:34.226 } 
00:15:34.226 ]' 00:15:34.226 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.226 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.485 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.743 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:34.743 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.313 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.313 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.314 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.314 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.572 00:15:35.572 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.572 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.572 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.832 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.833 { 00:15:35.833 "cntlid": 103, 00:15:35.833 "qid": 0, 00:15:35.833 "state": "enabled", 00:15:35.833 "thread": "nvmf_tgt_poll_group_000", 00:15:35.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:35.833 "listen_address": { 00:15:35.833 "trtype": "TCP", 00:15:35.833 "adrfam": "IPv4", 00:15:35.833 "traddr": "10.0.0.2", 00:15:35.833 "trsvcid": "4420" 00:15:35.833 }, 00:15:35.833 "peer_address": { 00:15:35.833 "trtype": "TCP", 00:15:35.833 "adrfam": "IPv4", 00:15:35.833 "traddr": "10.0.0.1", 00:15:35.833 "trsvcid": "59632" 00:15:35.833 }, 00:15:35.833 "auth": { 00:15:35.833 "state": "completed", 00:15:35.833 "digest": "sha512", 00:15:35.833 "dhgroup": "null" 00:15:35.833 } 00:15:35.833 } 00:15:35.833 ]' 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:35.833 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.092 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.092 16:27:02 
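The jq filters above (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) assert on the qpair dump returned by `nvmf_subsystem_get_qpairs`. A minimal standalone sketch of the same three checks, run against a trimmed copy of the payload printed in the log (Python is used here only so the check is self-contained; the payload values are copied from the dump above):

```python
import json

# Trimmed copy of the nvmf_subsystem_get_qpairs output shown in the log above.
qpairs = json.loads("""
[
  {
    "cntlid": 103,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "null"
    }
  }
]
""")

# Equivalent of: jq -r '.[0].auth.digest'  -> expected sha512 for this pass
assert qpairs[0]["auth"]["digest"] == "sha512"
# Equivalent of: jq -r '.[0].auth.dhgroup' -> the literal string "null"
# (no FFDHE group negotiated on this iteration)
assert qpairs[0]["auth"]["dhgroup"] == "null"
# Equivalent of: jq -r '.[0].auth.state'   -> authentication completed
assert qpairs[0]["auth"]["state"] == "completed"
print("auth checks passed")
```

The test loops over every digest/dhgroup combination, so the same three-field check recurs after each `bdev_nvme_attach_controller`, with only the expected `digest` and `dhgroup` strings changing per iteration.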
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.092 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.092 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:36.092 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.724 16:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.724 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.042 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.367 00:15:37.367 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.367 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.367 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.367 { 00:15:37.367 "cntlid": 105, 00:15:37.367 "qid": 0, 00:15:37.367 "state": "enabled", 00:15:37.367 "thread": "nvmf_tgt_poll_group_000", 00:15:37.367 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:37.367 "listen_address": { 00:15:37.367 "trtype": "TCP", 00:15:37.367 "adrfam": "IPv4", 00:15:37.367 "traddr": "10.0.0.2", 00:15:37.367 "trsvcid": "4420" 00:15:37.367 }, 00:15:37.367 "peer_address": { 00:15:37.367 "trtype": "TCP", 00:15:37.367 "adrfam": "IPv4", 00:15:37.367 "traddr": "10.0.0.1", 00:15:37.367 "trsvcid": "36038" 00:15:37.367 }, 00:15:37.367 "auth": { 00:15:37.367 "state": "completed", 00:15:37.367 "digest": "sha512", 00:15:37.367 "dhgroup": "ffdhe2048" 00:15:37.367 } 00:15:37.367 } 00:15:37.367 ]' 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.367 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:37.674 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:38.240 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.240 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.240 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.240 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.240 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.241 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.241 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.241 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.499 16:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.499 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.757 00:15:38.757 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.757 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.757 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.015 { 00:15:39.015 "cntlid": 107, 00:15:39.015 "qid": 0, 00:15:39.015 "state": "enabled", 00:15:39.015 "thread": "nvmf_tgt_poll_group_000", 00:15:39.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:39.015 "listen_address": { 00:15:39.015 "trtype": "TCP", 00:15:39.015 "adrfam": "IPv4", 00:15:39.015 "traddr": "10.0.0.2", 00:15:39.015 "trsvcid": "4420" 00:15:39.015 }, 00:15:39.015 "peer_address": { 00:15:39.015 "trtype": "TCP", 00:15:39.015 "adrfam": "IPv4", 00:15:39.015 "traddr": "10.0.0.1", 00:15:39.015 "trsvcid": "36082" 00:15:39.015 }, 00:15:39.015 "auth": { 00:15:39.015 "state": 
"completed", 00:15:39.015 "digest": "sha512", 00:15:39.015 "dhgroup": "ffdhe2048" 00:15:39.015 } 00:15:39.015 } 00:15:39.015 ]' 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.015 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.274 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:39.274 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:39.839 16:27:06 
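The `--dhchap-secret` / `--dhchap-ctrl-secret` strings passed to `nvme connect` above all follow the `DHHC-1:<id>:<base64>:` layout. A small sketch that splits one secret from the log into its parts; interpreting the `<id>` field as a hash-transform selector and the decoded payload as key material plus a 4-byte check value are assumptions stated here, not something this log confirms:

```python
import base64

# Secret copied verbatim from one of the nvme connect invocations above.
secret = "DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z:"

# Layout: DHHC-1:<id>:<base64>:  (trailing colon yields an empty final field)
prefix, hash_id, b64, trailer = secret.split(":")
assert prefix == "DHHC-1" and trailer == ""

raw = base64.b64decode(b64)
# 48 base64 characters decode to 36 bytes; assuming a 4-byte check value is
# appended to the key material, that corresponds to a 32-byte secret.
assert len(raw) == 36
print(hash_id, len(raw))
```

The longer `DHHC-1:03:...` secrets seen elsewhere in the log decode to 68 bytes, consistent with a 64-byte secret under the same assumption.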
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.839 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.098 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.357 00:15:40.357 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.357 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.357 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.616 
16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.616 { 00:15:40.616 "cntlid": 109, 00:15:40.616 "qid": 0, 00:15:40.616 "state": "enabled", 00:15:40.616 "thread": "nvmf_tgt_poll_group_000", 00:15:40.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:40.616 "listen_address": { 00:15:40.616 "trtype": "TCP", 00:15:40.616 "adrfam": "IPv4", 00:15:40.616 "traddr": "10.0.0.2", 00:15:40.616 "trsvcid": "4420" 00:15:40.616 }, 00:15:40.616 "peer_address": { 00:15:40.616 "trtype": "TCP", 00:15:40.616 "adrfam": "IPv4", 00:15:40.616 "traddr": "10.0.0.1", 00:15:40.616 "trsvcid": "36096" 00:15:40.616 }, 00:15:40.616 "auth": { 00:15:40.616 "state": "completed", 00:15:40.616 "digest": "sha512", 00:15:40.616 "dhgroup": "ffdhe2048" 00:15:40.616 } 00:15:40.616 } 00:15:40.616 ]' 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.616 16:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.616 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.874 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:40.874 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.441 
16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:41.441 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.700 16:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.700 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.959 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.218 { 00:15:42.218 "cntlid": 111, 
00:15:42.218 "qid": 0, 00:15:42.218 "state": "enabled", 00:15:42.218 "thread": "nvmf_tgt_poll_group_000", 00:15:42.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:42.218 "listen_address": { 00:15:42.218 "trtype": "TCP", 00:15:42.218 "adrfam": "IPv4", 00:15:42.218 "traddr": "10.0.0.2", 00:15:42.218 "trsvcid": "4420" 00:15:42.218 }, 00:15:42.218 "peer_address": { 00:15:42.218 "trtype": "TCP", 00:15:42.218 "adrfam": "IPv4", 00:15:42.218 "traddr": "10.0.0.1", 00:15:42.218 "trsvcid": "36138" 00:15:42.218 }, 00:15:42.218 "auth": { 00:15:42.218 "state": "completed", 00:15:42.218 "digest": "sha512", 00:15:42.218 "dhgroup": "ffdhe2048" 00:15:42.218 } 00:15:42.218 } 00:15:42.218 ]' 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.218 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.477 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:42.477 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:43.045 16:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.045 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.303 00:15:43.303 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.303 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.303 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.561 { 00:15:43.561 "cntlid": 113, 00:15:43.561 "qid": 0, 00:15:43.561 "state": "enabled", 00:15:43.561 "thread": "nvmf_tgt_poll_group_000", 00:15:43.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:43.561 "listen_address": { 00:15:43.561 "trtype": "TCP", 00:15:43.561 "adrfam": "IPv4", 00:15:43.561 "traddr": "10.0.0.2", 00:15:43.561 "trsvcid": "4420" 00:15:43.561 }, 00:15:43.561 "peer_address": { 00:15:43.561 "trtype": "TCP", 00:15:43.561 "adrfam": "IPv4", 00:15:43.561 "traddr": "10.0.0.1", 00:15:43.561 "trsvcid": "36178" 00:15:43.561 }, 00:15:43.561 "auth": { 00:15:43.561 "state": 
"completed", 00:15:43.561 "digest": "sha512", 00:15:43.561 "dhgroup": "ffdhe3072" 00:15:43.561 } 00:15:43.561 } 00:15:43.561 ]' 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.561 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.820 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.820 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.820 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.820 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:43.820 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.386 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.645 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.903 00:15:44.903 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.903 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.903 16:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.162 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.162 { 00:15:45.162 "cntlid": 115, 00:15:45.162 "qid": 0, 00:15:45.162 "state": "enabled", 00:15:45.163 "thread": "nvmf_tgt_poll_group_000", 00:15:45.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:45.163 "listen_address": { 00:15:45.163 "trtype": "TCP", 00:15:45.163 "adrfam": "IPv4", 00:15:45.163 "traddr": "10.0.0.2", 00:15:45.163 "trsvcid": "4420" 00:15:45.163 }, 00:15:45.163 "peer_address": { 00:15:45.163 "trtype": "TCP", 00:15:45.163 "adrfam": "IPv4", 00:15:45.163 "traddr": "10.0.0.1", 00:15:45.163 "trsvcid": "36206" 00:15:45.163 }, 00:15:45.163 "auth": { 00:15:45.163 "state": "completed", 00:15:45.163 "digest": "sha512", 00:15:45.163 "dhgroup": "ffdhe3072" 00:15:45.163 } 00:15:45.163 } 00:15:45.163 ]' 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.163 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.422 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:45.422 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.989 16:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.989 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.248 16:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.248 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.506 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.506 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.764 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.764 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.764 { 00:15:46.764 "cntlid": 117, 00:15:46.764 "qid": 0, 00:15:46.764 "state": "enabled", 00:15:46.764 "thread": "nvmf_tgt_poll_group_000", 00:15:46.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.765 "listen_address": { 00:15:46.765 "trtype": "TCP", 00:15:46.765 "adrfam": "IPv4", 00:15:46.765 "traddr": "10.0.0.2", 00:15:46.765 "trsvcid": "4420" 00:15:46.765 }, 00:15:46.765 "peer_address": { 00:15:46.765 "trtype": "TCP", 00:15:46.765 "adrfam": "IPv4", 00:15:46.765 "traddr": "10.0.0.1", 00:15:46.765 "trsvcid": "36242" 00:15:46.765 }, 00:15:46.765 "auth": { 00:15:46.765 "state": "completed", 00:15:46.765 "digest": "sha512", 00:15:46.765 "dhgroup": "ffdhe3072" 00:15:46.765 } 00:15:46.765 } 00:15:46.765 ]' 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.765 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:47.023 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:47.023 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.589 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.847 00:15:47.847 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.847 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.847 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.106 { 00:15:48.106 "cntlid": 119, 00:15:48.106 "qid": 0, 00:15:48.106 "state": "enabled", 00:15:48.106 "thread": "nvmf_tgt_poll_group_000", 00:15:48.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:48.106 "listen_address": { 00:15:48.106 "trtype": "TCP", 00:15:48.106 "adrfam": "IPv4", 00:15:48.106 "traddr": "10.0.0.2", 00:15:48.106 "trsvcid": "4420" 00:15:48.106 }, 00:15:48.106 "peer_address": { 00:15:48.106 "trtype": "TCP", 00:15:48.106 "adrfam": "IPv4", 00:15:48.106 "traddr": "10.0.0.1", 00:15:48.106 "trsvcid": "45086" 00:15:48.106 }, 00:15:48.106 "auth": { 00:15:48.106 
"state": "completed", 00:15:48.106 "digest": "sha512", 00:15:48.106 "dhgroup": "ffdhe3072" 00:15:48.106 } 00:15:48.106 } 00:15:48.106 ]' 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.106 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.364 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.364 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.364 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.364 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.364 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.364 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:48.364 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.931 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:48.931 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.189 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.447 00:15:49.447 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.447 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.448 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.706 
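The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion that recurs in this trace is bash's `${parameter:+word}` idiom: the extra `--dhchap-ctrlr-key` arguments are produced only when a controller key exists for the selected key id, and an empty array otherwise. A minimal standalone sketch of the same idiom (the `ckeys` contents and `keyid` values here are illustrative, not taken from the test):

```shell
#!/usr/bin/env bash
# ${var:+word} expands to "word" only if var is set and non-empty.
# target/auth.sh uses this to append DHCHAP controller-key arguments
# conditionally; the values below are illustrative stand-ins.
ckeys=("secretA" "" "secretC")

keyid=0
ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "key$keyid -> ${#ckey_args[@]} extra arg(s)"   # key present: 2 args

keyid=1
ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "key$keyid -> ${#ckey_args[@]} extra arg(s)"   # empty key: 0 args
```

Because the expansion is unquoted at the array-assignment level, a present key yields the two words `--dhchap-ctrlr-key ckeyN`, while a missing or empty key yields no words at all, so the RPC invocation simply omits the option.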
16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.706 { 00:15:49.706 "cntlid": 121, 00:15:49.706 "qid": 0, 00:15:49.706 "state": "enabled", 00:15:49.706 "thread": "nvmf_tgt_poll_group_000", 00:15:49.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:49.706 "listen_address": { 00:15:49.706 "trtype": "TCP", 00:15:49.706 "adrfam": "IPv4", 00:15:49.706 "traddr": "10.0.0.2", 00:15:49.706 "trsvcid": "4420" 00:15:49.706 }, 00:15:49.706 "peer_address": { 00:15:49.706 "trtype": "TCP", 00:15:49.706 "adrfam": "IPv4", 00:15:49.706 "traddr": "10.0.0.1", 00:15:49.706 "trsvcid": "45114" 00:15:49.706 }, 00:15:49.706 "auth": { 00:15:49.706 "state": "completed", 00:15:49.706 "digest": "sha512", 00:15:49.706 "dhgroup": "ffdhe4096" 00:15:49.706 } 00:15:49.706 } 00:15:49.706 ]' 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.706 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.706 16:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.965 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.965 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.965 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.965 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:49.965 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.529 16:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.529 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.787 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:50.787 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.787 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.787 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.787 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.788 16:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.788 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.046 00:15:51.046 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.046 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.046 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.304 { 00:15:51.304 "cntlid": 123, 00:15:51.304 "qid": 0, 00:15:51.304 "state": "enabled", 00:15:51.304 "thread": "nvmf_tgt_poll_group_000", 00:15:51.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:51.304 "listen_address": { 00:15:51.304 "trtype": "TCP", 00:15:51.304 "adrfam": "IPv4", 00:15:51.304 "traddr": "10.0.0.2", 00:15:51.304 "trsvcid": "4420" 00:15:51.304 }, 00:15:51.304 "peer_address": { 00:15:51.304 "trtype": "TCP", 00:15:51.304 "adrfam": "IPv4", 00:15:51.304 "traddr": "10.0.0.1", 00:15:51.304 "trsvcid": "45142" 00:15:51.304 }, 00:15:51.304 "auth": { 00:15:51.304 "state": "completed", 00:15:51.304 "digest": "sha512", 00:15:51.304 "dhgroup": "ffdhe4096" 00:15:51.304 } 00:15:51.304 } 00:15:51.304 ]' 00:15:51.304 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.304 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:51.563 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:51.563 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.130 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.388 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:52.388 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.388 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.388 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.389 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.646 00:15:52.646 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.647 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.647 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.905 { 00:15:52.905 "cntlid": 125, 00:15:52.905 "qid": 0, 00:15:52.905 "state": "enabled", 00:15:52.905 "thread": "nvmf_tgt_poll_group_000", 00:15:52.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:52.905 "listen_address": { 00:15:52.905 "trtype": "TCP", 00:15:52.905 "adrfam": "IPv4", 00:15:52.905 "traddr": "10.0.0.2", 00:15:52.905 "trsvcid": "4420" 00:15:52.905 }, 00:15:52.905 "peer_address": { 00:15:52.905 "trtype": "TCP", 00:15:52.905 "adrfam": "IPv4", 
00:15:52.905 "traddr": "10.0.0.1", 00:15:52.905 "trsvcid": "45176" 00:15:52.905 }, 00:15:52.905 "auth": { 00:15:52.905 "state": "completed", 00:15:52.905 "digest": "sha512", 00:15:52.905 "dhgroup": "ffdhe4096" 00:15:52.905 } 00:15:52.905 } 00:15:52.905 ]' 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.905 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.163 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:53.163 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.730 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.988 16:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.988 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.247 00:15:54.247 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.247 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.247 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.506 { 00:15:54.506 "cntlid": 127, 00:15:54.506 "qid": 0, 00:15:54.506 "state": "enabled", 00:15:54.506 "thread": "nvmf_tgt_poll_group_000", 00:15:54.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:54.506 "listen_address": { 00:15:54.506 "trtype": "TCP", 00:15:54.506 "adrfam": "IPv4", 00:15:54.506 "traddr": "10.0.0.2", 00:15:54.506 "trsvcid": "4420" 00:15:54.506 }, 00:15:54.506 "peer_address": { 00:15:54.506 "trtype": "TCP", 00:15:54.506 "adrfam": "IPv4", 00:15:54.506 "traddr": "10.0.0.1", 00:15:54.506 "trsvcid": "45208" 00:15:54.506 }, 00:15:54.506 "auth": { 00:15:54.506 "state": "completed", 00:15:54.506 "digest": "sha512", 00:15:54.506 "dhgroup": "ffdhe4096" 00:15:54.506 } 00:15:54.506 } 00:15:54.506 ]' 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.506 16:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.506 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.764 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:54.764 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:55.331 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.589 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.847 00:15:55.847 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.847 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.847 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.105 16:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.105 { 00:15:56.105 "cntlid": 129, 00:15:56.105 "qid": 0, 00:15:56.105 "state": "enabled", 00:15:56.105 "thread": "nvmf_tgt_poll_group_000", 00:15:56.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:56.105 "listen_address": { 00:15:56.105 "trtype": "TCP", 00:15:56.105 "adrfam": "IPv4", 00:15:56.105 "traddr": "10.0.0.2", 00:15:56.105 "trsvcid": "4420" 00:15:56.105 }, 00:15:56.105 "peer_address": { 00:15:56.105 "trtype": "TCP", 00:15:56.105 "adrfam": "IPv4", 00:15:56.105 "traddr": "10.0.0.1", 00:15:56.105 "trsvcid": "45230" 00:15:56.105 }, 00:15:56.105 "auth": { 00:15:56.105 "state": "completed", 00:15:56.105 "digest": "sha512", 00:15:56.105 "dhgroup": "ffdhe6144" 00:15:56.105 } 00:15:56.105 } 00:15:56.105 ]' 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.105 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.364 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:56.364 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.930 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:56.930 16:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.188 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.446 00:15:57.446 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.446 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.446 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.705 { 00:15:57.705 "cntlid": 131, 00:15:57.705 "qid": 0, 00:15:57.705 "state": "enabled", 00:15:57.705 "thread": "nvmf_tgt_poll_group_000", 00:15:57.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:57.705 "listen_address": { 00:15:57.705 "trtype": "TCP", 00:15:57.705 "adrfam": "IPv4", 00:15:57.705 "traddr": "10.0.0.2", 00:15:57.705 
"trsvcid": "4420" 00:15:57.705 }, 00:15:57.705 "peer_address": { 00:15:57.705 "trtype": "TCP", 00:15:57.705 "adrfam": "IPv4", 00:15:57.705 "traddr": "10.0.0.1", 00:15:57.705 "trsvcid": "59372" 00:15:57.705 }, 00:15:57.705 "auth": { 00:15:57.705 "state": "completed", 00:15:57.705 "digest": "sha512", 00:15:57.705 "dhgroup": "ffdhe6144" 00:15:57.705 } 00:15:57.705 } 00:15:57.705 ]' 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.705 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.962 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:57.962 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.528 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.785 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.786 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.044 00:15:59.044 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.044 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:59.044 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.303 { 00:15:59.303 "cntlid": 133, 00:15:59.303 "qid": 0, 00:15:59.303 "state": "enabled", 00:15:59.303 "thread": "nvmf_tgt_poll_group_000", 00:15:59.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:59.303 "listen_address": { 00:15:59.303 "trtype": "TCP", 00:15:59.303 "adrfam": "IPv4", 00:15:59.303 "traddr": "10.0.0.2", 00:15:59.303 "trsvcid": "4420" 00:15:59.303 }, 00:15:59.303 "peer_address": { 00:15:59.303 "trtype": "TCP", 00:15:59.303 "adrfam": "IPv4", 00:15:59.303 "traddr": "10.0.0.1", 00:15:59.303 "trsvcid": "59416" 00:15:59.303 }, 00:15:59.303 "auth": { 00:15:59.303 "state": "completed", 00:15:59.303 "digest": "sha512", 00:15:59.303 "dhgroup": "ffdhe6144" 00:15:59.303 } 00:15:59.303 } 00:15:59.303 ]' 00:15:59.303 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.303 16:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.303 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.561 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:15:59.561 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.128 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.386 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.644 00:16:00.645 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.645 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.645 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.903 { 00:16:00.903 "cntlid": 135, 00:16:00.903 "qid": 0, 00:16:00.903 "state": "enabled", 00:16:00.903 "thread": "nvmf_tgt_poll_group_000", 00:16:00.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:00.903 "listen_address": { 00:16:00.903 "trtype": "TCP", 00:16:00.903 "adrfam": "IPv4", 00:16:00.903 "traddr": "10.0.0.2", 00:16:00.903 "trsvcid": "4420" 00:16:00.903 }, 00:16:00.903 "peer_address": { 00:16:00.903 "trtype": "TCP", 00:16:00.903 "adrfam": "IPv4", 00:16:00.903 "traddr": "10.0.0.1", 00:16:00.903 "trsvcid": "59436" 00:16:00.903 }, 00:16:00.903 "auth": { 00:16:00.903 "state": "completed", 00:16:00.903 "digest": "sha512", 00:16:00.903 "dhgroup": "ffdhe6144" 00:16:00.903 } 00:16:00.903 } 00:16:00.903 ]' 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.903 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.161 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:01.161 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.729 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:01.729 16:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.988 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.555 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.555 { 00:16:02.555 "cntlid": 137, 00:16:02.555 "qid": 0, 00:16:02.555 "state": "enabled", 00:16:02.555 "thread": "nvmf_tgt_poll_group_000", 00:16:02.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:02.555 "listen_address": { 00:16:02.555 "trtype": "TCP", 00:16:02.555 "adrfam": "IPv4", 00:16:02.555 "traddr": "10.0.0.2", 00:16:02.555 
"trsvcid": "4420" 00:16:02.555 }, 00:16:02.555 "peer_address": { 00:16:02.555 "trtype": "TCP", 00:16:02.555 "adrfam": "IPv4", 00:16:02.555 "traddr": "10.0.0.1", 00:16:02.555 "trsvcid": "59450" 00:16:02.555 }, 00:16:02.555 "auth": { 00:16:02.555 "state": "completed", 00:16:02.555 "digest": "sha512", 00:16:02.555 "dhgroup": "ffdhe8192" 00:16:02.555 } 00:16:02.555 } 00:16:02.555 ]' 00:16:02.555 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.812 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.070 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:16:03.070 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.637 16:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.637 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.985 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.985 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.985 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.985 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.272 00:16:04.272 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.272 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.272 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.531 { 00:16:04.531 "cntlid": 139, 00:16:04.531 "qid": 0, 00:16:04.531 "state": "enabled", 00:16:04.531 "thread": "nvmf_tgt_poll_group_000", 00:16:04.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:04.531 "listen_address": { 00:16:04.531 "trtype": "TCP", 00:16:04.531 "adrfam": "IPv4", 00:16:04.531 "traddr": "10.0.0.2", 00:16:04.531 "trsvcid": "4420" 00:16:04.531 }, 00:16:04.531 "peer_address": { 00:16:04.531 "trtype": "TCP", 00:16:04.531 "adrfam": "IPv4", 00:16:04.531 "traddr": "10.0.0.1", 00:16:04.531 "trsvcid": "59466" 00:16:04.531 }, 00:16:04.531 "auth": { 00:16:04.531 "state": "completed", 00:16:04.531 "digest": "sha512", 00:16:04.531 "dhgroup": "ffdhe8192" 00:16:04.531 } 00:16:04.531 } 00:16:04.531 ]' 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.531 16:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.531 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.532 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.790 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:16:04.790 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: --dhchap-ctrl-secret DHHC-1:02:ZWMwMDE2ZmQ2Zjg1NzUwNWU4ZjYzNGNlMGM0ZmUxMzExMmJhMDU0YTc1OTQxYmQzANNVjQ==: 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.358 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.616 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:05.616 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.616 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.617 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.184 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.184 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.184 { 00:16:06.184 "cntlid": 141, 00:16:06.184 "qid": 0, 00:16:06.184 "state": "enabled", 00:16:06.184 "thread": "nvmf_tgt_poll_group_000", 00:16:06.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.184 "listen_address": { 00:16:06.184 "trtype": "TCP", 00:16:06.184 "adrfam": "IPv4", 00:16:06.184 "traddr": "10.0.0.2", 00:16:06.184 "trsvcid": "4420" 00:16:06.184 }, 00:16:06.184 "peer_address": { 00:16:06.184 "trtype": "TCP", 00:16:06.184 "adrfam": "IPv4", 00:16:06.184 "traddr": "10.0.0.1", 00:16:06.184 "trsvcid": "59508" 00:16:06.184 }, 00:16:06.184 "auth": { 00:16:06.184 "state": "completed", 00:16:06.184 "digest": "sha512", 00:16:06.184 "dhgroup": "ffdhe8192" 00:16:06.184 } 00:16:06.184 } 00:16:06.184 ]' 00:16:06.184 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.443 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.703 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:16:06.703 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:01:MDU3ZGMyYWQ2ZDIzNmEyY2FiZjMwMjJkMzhjZTIxMWP2Mj+Z: 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.271 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.271 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.838 00:16:07.838 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.838 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.838 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.097 { 00:16:08.097 "cntlid": 143, 00:16:08.097 "qid": 0, 00:16:08.097 "state": "enabled", 00:16:08.097 "thread": "nvmf_tgt_poll_group_000", 00:16:08.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:08.097 "listen_address": { 00:16:08.097 "trtype": "TCP", 00:16:08.097 "adrfam": 
"IPv4", 00:16:08.097 "traddr": "10.0.0.2", 00:16:08.097 "trsvcid": "4420" 00:16:08.097 }, 00:16:08.097 "peer_address": { 00:16:08.097 "trtype": "TCP", 00:16:08.097 "adrfam": "IPv4", 00:16:08.097 "traddr": "10.0.0.1", 00:16:08.097 "trsvcid": "46718" 00:16:08.097 }, 00:16:08.097 "auth": { 00:16:08.097 "state": "completed", 00:16:08.097 "digest": "sha512", 00:16:08.097 "dhgroup": "ffdhe8192" 00:16:08.097 } 00:16:08.097 } 00:16:08.097 ]' 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.097 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.098 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.098 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.356 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:08.356 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.924 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:09.183 16:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.184 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.751 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.751 { 00:16:09.751 "cntlid": 145, 00:16:09.751 "qid": 0, 00:16:09.751 "state": "enabled", 00:16:09.751 "thread": "nvmf_tgt_poll_group_000", 00:16:09.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.751 "listen_address": { 00:16:09.751 "trtype": "TCP", 00:16:09.751 "adrfam": "IPv4", 00:16:09.751 "traddr": "10.0.0.2", 00:16:09.751 "trsvcid": "4420" 00:16:09.751 }, 00:16:09.751 "peer_address": { 00:16:09.751 "trtype": "TCP", 00:16:09.751 "adrfam": "IPv4", 00:16:09.751 "traddr": "10.0.0.1", 00:16:09.751 "trsvcid": "46754" 00:16:09.751 }, 00:16:09.751 "auth": { 00:16:09.751 "state": 
"completed", 00:16:09.751 "digest": "sha512", 00:16:09.751 "dhgroup": "ffdhe8192" 00:16:09.751 } 00:16:09.751 } 00:16:09.751 ]' 00:16:09.751 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.008 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.267 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:16:10.267 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Mzc5MDg1MmYxNjdlMTVhNDhjOGVhMjI3MTEyZTVkNWRlMjVmYmNlZTQzYmUyNzQwsyTwLg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVlN2I4MmM5NzYzOTVjMDg4NTk1MTQ3MDIwNDRjMjQyNDhhNjZkMDY1ZGEzMTUzZTA0MjZmZGRiYmExNmNhZYTIWmw=: 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:10.834 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:10.835 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:11.094 request: 00:16:11.094 { 00:16:11.094 "name": "nvme0", 00:16:11.094 "trtype": "tcp", 00:16:11.094 "traddr": "10.0.0.2", 00:16:11.094 "adrfam": "ipv4", 00:16:11.094 "trsvcid": "4420", 00:16:11.094 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:11.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:11.094 "prchk_reftag": false, 00:16:11.094 "prchk_guard": false, 00:16:11.094 "hdgst": false, 00:16:11.094 "ddgst": false, 00:16:11.094 "dhchap_key": "key2", 00:16:11.094 "allow_unrecognized_csi": false, 00:16:11.094 "method": "bdev_nvme_attach_controller", 00:16:11.094 "req_id": 1 00:16:11.094 } 00:16:11.094 Got JSON-RPC error response 00:16:11.094 response: 00:16:11.094 { 00:16:11.094 "code": -5, 00:16:11.094 "message": 
"Input/output error" 00:16:11.094 } 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:11.094 16:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.094 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.353 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.613 request: 00:16:11.613 { 00:16:11.613 "name": "nvme0", 00:16:11.613 "trtype": "tcp", 00:16:11.613 "traddr": "10.0.0.2", 00:16:11.613 "adrfam": "ipv4", 00:16:11.613 "trsvcid": "4420", 00:16:11.613 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:11.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:11.613 "prchk_reftag": false, 00:16:11.613 "prchk_guard": false, 00:16:11.613 "hdgst": 
false, 00:16:11.613 "ddgst": false, 00:16:11.613 "dhchap_key": "key1", 00:16:11.613 "dhchap_ctrlr_key": "ckey2", 00:16:11.613 "allow_unrecognized_csi": false, 00:16:11.613 "method": "bdev_nvme_attach_controller", 00:16:11.613 "req_id": 1 00:16:11.613 } 00:16:11.613 Got JSON-RPC error response 00:16:11.613 response: 00:16:11.613 { 00:16:11.613 "code": -5, 00:16:11.613 "message": "Input/output error" 00:16:11.613 } 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.613 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.181 request: 00:16:12.181 { 00:16:12.181 "name": "nvme0", 00:16:12.181 "trtype": 
"tcp", 00:16:12.181 "traddr": "10.0.0.2", 00:16:12.181 "adrfam": "ipv4", 00:16:12.181 "trsvcid": "4420", 00:16:12.181 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:12.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.181 "prchk_reftag": false, 00:16:12.181 "prchk_guard": false, 00:16:12.181 "hdgst": false, 00:16:12.181 "ddgst": false, 00:16:12.181 "dhchap_key": "key1", 00:16:12.181 "dhchap_ctrlr_key": "ckey1", 00:16:12.181 "allow_unrecognized_csi": false, 00:16:12.181 "method": "bdev_nvme_attach_controller", 00:16:12.181 "req_id": 1 00:16:12.181 } 00:16:12.181 Got JSON-RPC error response 00:16:12.181 response: 00:16:12.181 { 00:16:12.181 "code": -5, 00:16:12.181 "message": "Input/output error" 00:16:12.181 } 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2793204 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2793204 ']' 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2793204 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793204 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793204' 00:16:12.181 killing process with pid 2793204 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2793204 00:16:12.181 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2793204 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2814767 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2814767 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2814767 ']' 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2814767 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2814767 ']' 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.441 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.700 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.700 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:12.700 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:12.700 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.700 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 null0 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EtN 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.v4S ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.v4S 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zde 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.S2T ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.S2T 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jeF 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XWm ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XWm 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vzb 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:12.959 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.960 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.893 nvme0n1 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.893 { 00:16:13.893 "cntlid": 1, 00:16:13.893 "qid": 0, 00:16:13.893 "state": "enabled", 00:16:13.893 "thread": "nvmf_tgt_poll_group_000", 00:16:13.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.893 "listen_address": { 00:16:13.893 "trtype": "TCP", 00:16:13.893 "adrfam": "IPv4", 00:16:13.893 "traddr": "10.0.0.2", 00:16:13.893 "trsvcid": "4420" 00:16:13.893 }, 00:16:13.893 "peer_address": { 00:16:13.893 "trtype": "TCP", 00:16:13.893 "adrfam": "IPv4", 00:16:13.893 "traddr": 
"10.0.0.1", 00:16:13.893 "trsvcid": "46796" 00:16:13.893 }, 00:16:13.893 "auth": { 00:16:13.893 "state": "completed", 00:16:13.893 "digest": "sha512", 00:16:13.893 "dhgroup": "ffdhe8192" 00:16:13.893 } 00:16:13.893 } 00:16:13.893 ]' 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.893 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.151 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.151 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.151 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.151 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:14.151 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:14.716 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:14.716 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:14.974 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.974 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.232 request: 00:16:15.232 { 00:16:15.232 "name": "nvme0", 00:16:15.232 "trtype": "tcp", 00:16:15.232 "traddr": "10.0.0.2", 00:16:15.232 "adrfam": "ipv4", 00:16:15.232 "trsvcid": "4420", 00:16:15.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:15.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.232 "prchk_reftag": false, 00:16:15.232 "prchk_guard": false, 00:16:15.232 "hdgst": false, 00:16:15.232 "ddgst": false, 00:16:15.232 "dhchap_key": "key3", 00:16:15.232 
"allow_unrecognized_csi": false, 00:16:15.232 "method": "bdev_nvme_attach_controller", 00:16:15.232 "req_id": 1 00:16:15.232 } 00:16:15.232 Got JSON-RPC error response 00:16:15.232 response: 00:16:15.232 { 00:16:15.232 "code": -5, 00:16:15.232 "message": "Input/output error" 00:16:15.232 } 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:15.232 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:15.490 16:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.490 request: 00:16:15.490 { 00:16:15.490 "name": "nvme0", 00:16:15.490 "trtype": "tcp", 00:16:15.490 "traddr": "10.0.0.2", 00:16:15.490 "adrfam": "ipv4", 00:16:15.490 "trsvcid": "4420", 00:16:15.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:15.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.490 "prchk_reftag": false, 00:16:15.490 "prchk_guard": false, 00:16:15.490 "hdgst": false, 00:16:15.490 "ddgst": false, 00:16:15.490 "dhchap_key": "key3", 00:16:15.490 "allow_unrecognized_csi": false, 00:16:15.490 "method": "bdev_nvme_attach_controller", 00:16:15.490 "req_id": 1 00:16:15.490 } 00:16:15.490 Got JSON-RPC error response 00:16:15.490 response: 00:16:15.490 { 00:16:15.490 "code": -5, 00:16:15.490 "message": "Input/output error" 00:16:15.490 } 00:16:15.490 
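The repeated `NOT bdev_connect …` sequences above, followed by `es=1` and `(( !es == 0 ))`, implement an expect-failure check: the RPC is run, its nonzero exit status is captured, and the test step passes only because the attach attempt failed with the JSON-RPC `-5` "Input/output error". A minimal standalone sketch of that pattern (a simplification, not the exact `autotest_common.sh` helper body):

```shell
#!/usr/bin/env bash
# NOT: succeed only when the wrapped command fails -- a sketch of the
# expect-failure helper invoked as "NOT bdev_connect ..." in the log above.
NOT() {
    local es=0
    "$@" || es=$?        # run the command and capture its exit status
    (( es != 0 ))        # invert: a nonzero status means the check passed
}

NOT false && echo "expected failure: ok"
NOT true  || echo "unexpected success: caught"
```

In the real test, the wrapped command is the `rpc.py … bdev_nvme_attach_controller` call with a deliberately mismatched DHCHAP key, so the inverted status confirms the target rejected the authentication attempt.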
16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:15.490 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:15.749 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:16.007 request: 00:16:16.007 { 00:16:16.007 "name": "nvme0", 00:16:16.007 "trtype": "tcp", 00:16:16.007 "traddr": "10.0.0.2", 00:16:16.007 "adrfam": "ipv4", 00:16:16.007 "trsvcid": "4420", 00:16:16.007 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:16.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:16.007 "prchk_reftag": false, 00:16:16.007 "prchk_guard": false, 00:16:16.007 "hdgst": false, 00:16:16.007 "ddgst": false, 00:16:16.007 "dhchap_key": "key0", 00:16:16.007 "dhchap_ctrlr_key": "key1", 00:16:16.007 "allow_unrecognized_csi": false, 00:16:16.007 "method": "bdev_nvme_attach_controller", 00:16:16.007 "req_id": 1 00:16:16.007 } 00:16:16.007 Got JSON-RPC error response 00:16:16.007 response: 00:16:16.007 { 00:16:16.007 "code": -5, 00:16:16.007 "message": "Input/output error" 00:16:16.007 } 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:16.007 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:16.008 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:16.265 nvme0n1 00:16:16.265 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:16.265 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:16.265 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.524 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.524 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.524 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:16.783 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:17.717 nvme0n1 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.717 
16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:17.717 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.975 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.975 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:17.975 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: --dhchap-ctrl-secret DHHC-1:03:MmIyMDM4YjU1NGU5OTYwMGQ4MzcxZDlkMmI3NjM3NjI5NzAzNzM1ZmUwYTM5OTdhYTY0ODM2NTc0NTU0OGFiNwhppdY=: 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.541 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:18.799 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:19.057 request: 00:16:19.057 { 00:16:19.057 "name": "nvme0", 00:16:19.057 "trtype": "tcp", 00:16:19.057 "traddr": "10.0.0.2", 00:16:19.057 "adrfam": "ipv4", 00:16:19.057 "trsvcid": "4420", 00:16:19.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:19.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:19.057 "prchk_reftag": false, 00:16:19.057 "prchk_guard": false, 00:16:19.057 "hdgst": false, 00:16:19.057 "ddgst": false, 00:16:19.057 "dhchap_key": "key1", 00:16:19.057 "allow_unrecognized_csi": false, 00:16:19.057 "method": "bdev_nvme_attach_controller", 00:16:19.057 "req_id": 1 00:16:19.057 } 00:16:19.057 Got JSON-RPC error response 00:16:19.057 response: 00:16:19.057 { 00:16:19.057 "code": -5, 00:16:19.057 "message": "Input/output error" 00:16:19.057 } 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:19.057 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:19.990 nvme0n1 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.990 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:20.249 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:20.508 nvme0n1 00:16:20.508 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:20.508 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.508 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: '' 2s 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: ]] 00:16:20.767 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzU5MTgyN2Q3NWNkNmFlZDA4Yjk5MjQ2N2U4YjE2YWSDbHPN: 00:16:21.025 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:21.025 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:21.025 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:22.929 
16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: 2s 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:22.930 16:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: ]] 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzVmMjFjMTM0ZGZiOGU3MTQ1OTNlZGU4Y2IxNTk3MWFlYWIzNjgyODk2MGM3YTViejK4BA==: 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:22.930 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:24.829 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:24.829 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:24.829 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:24.829 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:25.087 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:25.654 nvme0n1 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.654 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:26.220 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:26.220 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:26.220 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.478 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:26.736 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:27.303 request: 00:16:27.303 { 00:16:27.303 "name": "nvme0", 00:16:27.303 "dhchap_key": "key1", 00:16:27.303 "dhchap_ctrlr_key": "key3", 00:16:27.303 "method": "bdev_nvme_set_keys", 00:16:27.303 "req_id": 1 00:16:27.303 } 00:16:27.303 Got JSON-RPC error response 00:16:27.303 response: 00:16:27.303 { 00:16:27.303 "code": -13, 00:16:27.303 "message": "Permission denied" 00:16:27.303 } 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.303 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:27.562 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:27.562 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:28.496 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:28.496 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:28.496 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:28.755 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:29.322 nvme0n1 00:16:29.322 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:29.322 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.322 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.322 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:29.323 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:29.890 request: 00:16:29.890 { 00:16:29.890 "name": "nvme0", 00:16:29.890 "dhchap_key": "key2", 00:16:29.890 "dhchap_ctrlr_key": "key0", 00:16:29.890 "method": "bdev_nvme_set_keys", 00:16:29.890 "req_id": 1 00:16:29.890 } 00:16:29.890 Got JSON-RPC error response 00:16:29.890 response: 00:16:29.890 { 00:16:29.890 "code": -13, 00:16:29.890 "message": "Permission denied" 00:16:29.890 } 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.890 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:30.149 
16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:30.149 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:31.083 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:31.083 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:31.083 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2793231 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2793231 ']' 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2793231 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.341 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793231 00:16:31.342 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:31.342 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:31.342 16:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793231' 00:16:31.342 killing process with pid 2793231 00:16:31.342 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2793231 00:16:31.342 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2793231 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:31.599 rmmod nvme_tcp 00:16:31.599 rmmod nvme_fabrics 00:16:31.599 rmmod nvme_keyring 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:31.599 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2814767 ']' 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2814767 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2814767 ']' 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 2814767 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814767 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814767' 00:16:31.600 killing process with pid 2814767 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2814767 00:16:31.600 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2814767 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.858 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.761 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:33.761 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.EtN /tmp/spdk.key-sha256.Zde /tmp/spdk.key-sha384.jeF /tmp/spdk.key-sha512.Vzb /tmp/spdk.key-sha512.v4S /tmp/spdk.key-sha384.S2T /tmp/spdk.key-sha256.XWm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:33.761 00:16:33.761 real 2m28.356s 00:16:33.761 user 5m41.654s 00:16:33.761 sys 0m23.213s 00:16:33.761 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.761 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.761 ************************************ 00:16:33.761 END TEST nvmf_auth_target 00:16:33.761 ************************************ 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:34.021 16:28:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.021 ************************************ 00:16:34.021 START TEST nvmf_bdevio_no_huge 00:16:34.021 ************************************ 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:34.021 * Looking for test storage... 00:16:34.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.021 16:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.021 --rc genhtml_branch_coverage=1 00:16:34.021 --rc genhtml_function_coverage=1 00:16:34.021 --rc genhtml_legend=1 00:16:34.021 --rc geninfo_all_blocks=1 00:16:34.021 --rc geninfo_unexecuted_blocks=1 00:16:34.021 00:16:34.021 ' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.021 --rc genhtml_branch_coverage=1 00:16:34.021 --rc genhtml_function_coverage=1 00:16:34.021 --rc genhtml_legend=1 00:16:34.021 --rc geninfo_all_blocks=1 00:16:34.021 --rc geninfo_unexecuted_blocks=1 00:16:34.021 00:16:34.021 ' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.021 --rc genhtml_branch_coverage=1 00:16:34.021 --rc genhtml_function_coverage=1 00:16:34.021 --rc genhtml_legend=1 00:16:34.021 --rc geninfo_all_blocks=1 00:16:34.021 --rc geninfo_unexecuted_blocks=1 00:16:34.021 00:16:34.021 ' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.021 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.021 --rc genhtml_branch_coverage=1 00:16:34.021 --rc genhtml_function_coverage=1 00:16:34.021 --rc genhtml_legend=1 00:16:34.021 --rc geninfo_all_blocks=1 00:16:34.021 --rc geninfo_unexecuted_blocks=1 00:16:34.021 00:16:34.021 ' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.021 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.022 16:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.022 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.280 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:16:39.549 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:39.549 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:39.549 Found net devices under 0000:86:00.0: cvl_0_0 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.549 
16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:39.549 Found net devices under 0000:86:00.1: cvl_0_1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.549 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:16:39.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:16:39.550 00:16:39.550 --- 10.0.0.2 ping statistics --- 00:16:39.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.550 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:16:39.550 00:16:39.550 --- 10.0.0.1 ping statistics --- 00:16:39.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.550 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2821430 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2821430 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2821430 ']' 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.550 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:39.808 [2024-11-04 16:28:06.418080] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:16:39.808 [2024-11-04 16:28:06.418129] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:39.808 [2024-11-04 16:28:06.493417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.808 [2024-11-04 16:28:06.539930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.808 [2024-11-04 16:28:06.539961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.808 [2024-11-04 16:28:06.539969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.808 [2024-11-04 16:28:06.539976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.808 [2024-11-04 16:28:06.539981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.808 [2024-11-04 16:28:06.541167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:39.808 [2024-11-04 16:28:06.541275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:39.808 [2024-11-04 16:28:06.541380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.808 [2024-11-04 16:28:06.541381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 [2024-11-04 16:28:06.693029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.066 16:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 Malloc0 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.066 [2024-11-04 16:28:06.729317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.066 16:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:40.066 { 00:16:40.066 "params": { 00:16:40.066 "name": "Nvme$subsystem", 00:16:40.066 "trtype": "$TEST_TRANSPORT", 00:16:40.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.066 "adrfam": "ipv4", 00:16:40.066 "trsvcid": "$NVMF_PORT", 00:16:40.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.066 "hdgst": ${hdgst:-false}, 00:16:40.066 "ddgst": ${ddgst:-false} 00:16:40.066 }, 00:16:40.066 "method": "bdev_nvme_attach_controller" 00:16:40.066 } 00:16:40.066 EOF 00:16:40.066 )") 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:40.066 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:40.066 "params": { 00:16:40.066 "name": "Nvme1", 00:16:40.066 "trtype": "tcp", 00:16:40.066 "traddr": "10.0.0.2", 00:16:40.066 "adrfam": "ipv4", 00:16:40.066 "trsvcid": "4420", 00:16:40.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.066 "hdgst": false, 00:16:40.066 "ddgst": false 00:16:40.066 }, 00:16:40.066 "method": "bdev_nvme_attach_controller" 00:16:40.066 }' 00:16:40.066 [2024-11-04 16:28:06.778869] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:16:40.066 [2024-11-04 16:28:06.778913] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2821647 ] 00:16:40.066 [2024-11-04 16:28:06.845849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.324 [2024-11-04 16:28:06.893761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.324 [2024-11-04 16:28:06.893850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.324 [2024-11-04 16:28:06.893851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.581 I/O targets: 00:16:40.581 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:40.581 00:16:40.581 00:16:40.581 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.581 http://cunit.sourceforge.net/ 00:16:40.581 00:16:40.581 00:16:40.581 Suite: bdevio tests on: Nvme1n1 00:16:40.581 Test: blockdev write read block ...passed 00:16:40.581 Test: blockdev write zeroes read block ...passed 00:16:40.581 Test: blockdev write zeroes read no split ...passed 00:16:40.581 Test: blockdev write zeroes 
read split ...passed 00:16:40.581 Test: blockdev write zeroes read split partial ...passed 00:16:40.581 Test: blockdev reset ...[2024-11-04 16:28:07.337901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:40.581 [2024-11-04 16:28:07.337963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101b920 (9): Bad file descriptor 00:16:40.581 [2024-11-04 16:28:07.355507] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:16:40.581 passed 00:16:40.581 Test: blockdev write read 8 blocks ...passed 00:16:40.581 Test: blockdev write read size > 128k ...passed 00:16:40.581 Test: blockdev write read invalid size ...passed 00:16:40.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.838 Test: blockdev write read max offset ...passed 00:16:40.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.838 Test: blockdev writev readv 8 blocks ...passed 00:16:40.838 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.838 Test: blockdev writev readv block ...passed 00:16:40.838 Test: blockdev writev readv size > 128k ...passed 00:16:40.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.839 Test: blockdev comparev and writev ...[2024-11-04 16:28:07.568358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.568385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.568399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 
16:28:07.568406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.568657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.568667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.568679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.568686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.568918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.568928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.568939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.568946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.569194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.569203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.569214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.839 [2024-11-04 16:28:07.569225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:40.839 passed 00:16:40.839 Test: blockdev nvme passthru rw ...passed 00:16:40.839 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:28:07.652950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.839 [2024-11-04 16:28:07.652967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.653070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.839 [2024-11-04 16:28:07.653079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.653183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.839 [2024-11-04 16:28:07.653192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:40.839 [2024-11-04 16:28:07.653292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.839 [2024-11-04 16:28:07.653301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:40.839 passed 00:16:41.096 Test: blockdev nvme admin passthru ...passed 00:16:41.096 Test: blockdev copy ...passed 00:16:41.096 00:16:41.096 Run Summary: Type Total Ran Passed Failed Inactive 00:16:41.096 suites 1 1 n/a 0 0 00:16:41.096 tests 23 23 23 0 0 00:16:41.097 asserts 152 152 152 0 n/a 00:16:41.097 00:16:41.097 Elapsed time = 1.067 seconds 
00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.354 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.354 rmmod nvme_tcp 00:16:41.354 rmmod nvme_fabrics 00:16:41.354 rmmod nvme_keyring 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2821430 ']' 00:16:41.354 16:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2821430 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2821430 ']' 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2821430 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821430 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821430' 00:16:41.354 killing process with pid 2821430 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2821430 00:16:41.354 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2821430 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.613 16:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.613 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:44.152 00:16:44.152 real 0m9.811s 00:16:44.152 user 0m10.962s 00:16:44.152 sys 0m5.006s 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:44.152 ************************************ 00:16:44.152 END TEST nvmf_bdevio_no_huge 00:16:44.152 ************************************ 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.152 
************************************ 00:16:44.152 START TEST nvmf_tls 00:16:44.152 ************************************ 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:44.152 * Looking for test storage... 00:16:44.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.152 --rc genhtml_branch_coverage=1 00:16:44.152 --rc genhtml_function_coverage=1 00:16:44.152 --rc genhtml_legend=1 00:16:44.152 --rc geninfo_all_blocks=1 00:16:44.152 --rc geninfo_unexecuted_blocks=1 00:16:44.152 00:16:44.152 ' 00:16:44.152 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.153 --rc genhtml_branch_coverage=1 00:16:44.153 --rc genhtml_function_coverage=1 00:16:44.153 --rc genhtml_legend=1 00:16:44.153 --rc geninfo_all_blocks=1 00:16:44.153 --rc geninfo_unexecuted_blocks=1 00:16:44.153 00:16:44.153 ' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.153 --rc genhtml_branch_coverage=1 00:16:44.153 --rc genhtml_function_coverage=1 00:16:44.153 --rc genhtml_legend=1 00:16:44.153 --rc geninfo_all_blocks=1 00:16:44.153 --rc geninfo_unexecuted_blocks=1 00:16:44.153 00:16:44.153 ' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.153 --rc genhtml_branch_coverage=1 00:16:44.153 --rc genhtml_function_coverage=1 00:16:44.153 --rc genhtml_legend=1 00:16:44.153 --rc geninfo_all_blocks=1 00:16:44.153 --rc geninfo_unexecuted_blocks=1 00:16:44.153 00:16:44.153 ' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.153 
16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:16:44.153 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.417 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.417 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:16:49.417 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.418 16:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:49.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:49.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:49.418 16:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:49.418 Found net devices under 0000:86:00.0: cvl_0_0 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:49.418 Found net devices under 0000:86:00.1: cvl_0_1 00:16:49.418 16:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.418 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:49.419 
16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.419 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:49.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:16:49.677 00:16:49.677 --- 10.0.0.2 ping statistics --- 00:16:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.677 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:16:49.677 00:16:49.677 --- 10.0.0.1 ping statistics --- 00:16:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.677 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2825325 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2825325 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2825325 ']' 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.677 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.677 [2024-11-04 16:28:16.486853] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:16:49.677 [2024-11-04 16:28:16.486898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.936 [2024-11-04 16:28:16.555574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.936 [2024-11-04 16:28:16.595987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.936 [2024-11-04 16:28:16.596020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.936 [2024-11-04 16:28:16.596027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.936 [2024-11-04 16:28:16.596033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.936 [2024-11-04 16:28:16.596038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.936 [2024-11-04 16:28:16.596589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:49.936 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:50.195 true 00:16:50.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:50.454 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:50.454 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:50.454 
16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.454 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.454 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:50.713 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:50.713 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:50.713 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.972 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:51.231 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:51.231 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:51.231 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:16:51.489 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.489 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:51.748 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:51.748 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:51.748 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:51.748 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.748 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:52.007 16:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Qykfq14PcS 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.UKApxZXqUL 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Qykfq14PcS 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.UKApxZXqUL 00:16:52.007 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.265 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:52.523 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Qykfq14PcS 00:16:52.523 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qykfq14PcS 00:16:52.523 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.781 [2024-11-04 16:28:19.389980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.781 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.781 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:53.039 [2024-11-04 16:28:19.754910] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.039 [2024-11-04 16:28:19.755117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.039 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.297 malloc0 00:16:53.297 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.555 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qykfq14PcS 00:16:53.555 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:53.814 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Qykfq14PcS 00:17:03.897 Initializing NVMe Controllers 00:17:03.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:03.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:03.897 Initialization complete. Launching workers. 
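The `format_interchange_psk` calls earlier in the log (nvmf/common.sh, the `python -` step) wrap a configured hex key string into the NVMe TLS PSK interchange format, producing values like `NVMeTLSkey-1:01:MDAx...JEiQ:` that are then written to the `/tmp/tmp.*` key files. A minimal sketch of that transformation, assuming (as in the NVMe/TCP PSK interchange format) that a CRC-32 of the key bytes is appended little-endian before base64 encoding; the function name and structure here mirror the shell helper but are illustrative, not SPDK's exact implementation:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Sketch of the interchange-format wrapping seen in the log:
    NVMeTLSkey-1:<hh>:base64(key_bytes || CRC-32(key_bytes)):"""
    psk = key.encode()                           # the ASCII key string itself
    crc = zlib.crc32(psk).to_bytes(4, "little")  # CRC-32 appended little-endian
    return f"NVMeTLSkey-1:{digest:02}:" + base64.b64encode(psk + crc).decode() + ":"

key = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(key)
```

The resulting string is what the test stores via `keyring_file_add_key` and hands to `nvmf_subsystem_add_host --psk` / `bdev_nvme_attach_controller --psk`.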
00:17:03.897 ======================================================== 00:17:03.897 Latency(us) 00:17:03.897 Device Information : IOPS MiB/s Average min max 00:17:03.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16870.98 65.90 3793.58 829.31 4766.49 00:17:03.897 ======================================================== 00:17:03.897 Total : 16870.98 65.90 3793.58 829.31 4766.49 00:17:03.897 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qykfq14PcS 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qykfq14PcS 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2827771 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2827771 /var/tmp/bdevperf.sock 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2827771 ']' 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.897 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.897 [2024-11-04 16:28:30.673218] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:03.897 [2024-11-04 16:28:30.673266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827771 ] 00:17:04.156 [2024-11-04 16:28:30.730170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.156 [2024-11-04 16:28:30.772718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.156 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.156 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:04.156 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qykfq14PcS 00:17:04.415 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:04.415 [2024-11-04 16:28:31.218860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.674 TLSTESTn1 00:17:04.674 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:04.674 Running I/O for 10 seconds... 00:17:06.992 5367.00 IOPS, 20.96 MiB/s [2024-11-04T15:28:34.752Z] 5456.00 IOPS, 21.31 MiB/s [2024-11-04T15:28:35.687Z] 5550.33 IOPS, 21.68 MiB/s [2024-11-04T15:28:36.623Z] 5596.00 IOPS, 21.86 MiB/s [2024-11-04T15:28:37.559Z] 5611.60 IOPS, 21.92 MiB/s [2024-11-04T15:28:38.494Z] 5595.67 IOPS, 21.86 MiB/s [2024-11-04T15:28:39.448Z] 5579.00 IOPS, 21.79 MiB/s [2024-11-04T15:28:40.823Z] 5585.75 IOPS, 21.82 MiB/s [2024-11-04T15:28:41.759Z] 5594.22 IOPS, 21.85 MiB/s [2024-11-04T15:28:41.759Z] 5586.80 IOPS, 21.82 MiB/s 00:17:14.935 Latency(us) 00:17:14.935 [2024-11-04T15:28:41.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.935 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:14.935 Verification LBA range: start 0x0 length 0x2000 00:17:14.935 TLSTESTn1 : 10.02 5590.39 21.84 0.00 0.00 22860.39 4587.52 30333.81 00:17:14.935 [2024-11-04T15:28:41.759Z] =================================================================================================================== 00:17:14.935 [2024-11-04T15:28:41.759Z] Total : 5590.39 21.84 0.00 0.00 22860.39 4587.52 30333.81 00:17:14.935 { 00:17:14.935 "results": [ 00:17:14.935 { 00:17:14.935 "job": "TLSTESTn1", 00:17:14.935 "core_mask": "0x4", 00:17:14.935 "workload": "verify", 00:17:14.935 "status": "finished", 00:17:14.935 "verify_range": { 00:17:14.935 "start": 0, 00:17:14.935 "length": 8192 00:17:14.935 }, 00:17:14.935 "queue_depth": 128, 00:17:14.935 "io_size": 4096, 00:17:14.935 "runtime": 10.016472, 00:17:14.935 "iops": 
5590.391507109489, 00:17:14.935 "mibps": 21.83746682464644, 00:17:14.935 "io_failed": 0, 00:17:14.935 "io_timeout": 0, 00:17:14.935 "avg_latency_us": 22860.386556590776, 00:17:14.936 "min_latency_us": 4587.52, 00:17:14.936 "max_latency_us": 30333.805714285714 00:17:14.936 } 00:17:14.936 ], 00:17:14.936 "core_count": 1 00:17:14.936 } 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2827771 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2827771 ']' 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2827771 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827771 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827771' 00:17:14.936 killing process with pid 2827771 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2827771 00:17:14.936 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.936 00:17:14.936 Latency(us) 00:17:14.936 [2024-11-04T15:28:41.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.936 [2024-11-04T15:28:41.760Z] 
=================================================================================================================== 00:17:14.936 [2024-11-04T15:28:41.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2827771 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UKApxZXqUL 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UKApxZXqUL 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UKApxZXqUL 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UKApxZXqUL 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829423 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829423 /var/tmp/bdevperf.sock 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2829423 ']' 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.936 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.936 [2024-11-04 16:28:41.705536] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:14.936 [2024-11-04 16:28:41.705587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829423 ] 00:17:15.195 [2024-11-04 16:28:41.766535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.195 [2024-11-04 16:28:41.806237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.195 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.195 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:15.195 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UKApxZXqUL 00:17:15.453 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:15.453 [2024-11-04 16:28:42.251742] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.453 [2024-11-04 16:28:42.256479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:15.453 [2024-11-04 16:28:42.257119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bc170 (107): Transport endpoint is not connected 00:17:15.453 [2024-11-04 16:28:42.258111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bc170 (9): Bad file descriptor 00:17:15.453 
[2024-11-04 16:28:42.259113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:15.453 [2024-11-04 16:28:42.259122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:15.453 [2024-11-04 16:28:42.259130] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:15.453 [2024-11-04 16:28:42.259140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:15.453 request: 00:17:15.453 { 00:17:15.453 "name": "TLSTEST", 00:17:15.453 "trtype": "tcp", 00:17:15.453 "traddr": "10.0.0.2", 00:17:15.454 "adrfam": "ipv4", 00:17:15.454 "trsvcid": "4420", 00:17:15.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.454 "prchk_reftag": false, 00:17:15.454 "prchk_guard": false, 00:17:15.454 "hdgst": false, 00:17:15.454 "ddgst": false, 00:17:15.454 "psk": "key0", 00:17:15.454 "allow_unrecognized_csi": false, 00:17:15.454 "method": "bdev_nvme_attach_controller", 00:17:15.454 "req_id": 1 00:17:15.454 } 00:17:15.454 Got JSON-RPC error response 00:17:15.454 response: 00:17:15.454 { 00:17:15.454 "code": -5, 00:17:15.454 "message": "Input/output error" 00:17:15.454 } 00:17:15.454 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2829423 00:17:15.454 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2829423 ']' 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2829423 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829423 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829423' 00:17:15.714 killing process with pid 2829423 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2829423 00:17:15.714 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.714 00:17:15.714 Latency(us) 00:17:15.714 [2024-11-04T15:28:42.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.714 [2024-11-04T15:28:42.538Z] =================================================================================================================== 00:17:15.714 [2024-11-04T15:28:42.538Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2829423 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Qykfq14PcS 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Qykfq14PcS 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Qykfq14PcS 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qykfq14PcS 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829636 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829636 
/var/tmp/bdevperf.sock 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2829636 ']' 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.714 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.714 [2024-11-04 16:28:42.513225] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:15.714 [2024-11-04 16:28:42.513277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829636 ] 00:17:15.973 [2024-11-04 16:28:42.571156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.973 [2024-11-04 16:28:42.607856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.973 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.973 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:15.973 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qykfq14PcS 00:17:16.232 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:16.232 [2024-11-04 16:28:43.029813] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.232 [2024-11-04 16:28:43.035973] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.232 [2024-11-04 16:28:43.035995] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.232 [2024-11-04 16:28:43.036018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:16.232 [2024-11-04 16:28:43.036117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a170 (107): Transport endpoint is not connected 00:17:16.232 [2024-11-04 16:28:43.037102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a170 (9): Bad file descriptor 00:17:16.232 [2024-11-04 16:28:43.038104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:16.232 [2024-11-04 16:28:43.038115] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:16.232 [2024-11-04 16:28:43.038123] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:16.232 [2024-11-04 16:28:43.038134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:16.232 request: 00:17:16.232 { 00:17:16.232 "name": "TLSTEST", 00:17:16.232 "trtype": "tcp", 00:17:16.232 "traddr": "10.0.0.2", 00:17:16.232 "adrfam": "ipv4", 00:17:16.232 "trsvcid": "4420", 00:17:16.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:16.232 "prchk_reftag": false, 00:17:16.232 "prchk_guard": false, 00:17:16.232 "hdgst": false, 00:17:16.232 "ddgst": false, 00:17:16.232 "psk": "key0", 00:17:16.232 "allow_unrecognized_csi": false, 00:17:16.232 "method": "bdev_nvme_attach_controller", 00:17:16.232 "req_id": 1 00:17:16.232 } 00:17:16.232 Got JSON-RPC error response 00:17:16.232 response: 00:17:16.232 { 00:17:16.232 "code": -5, 00:17:16.232 "message": "Input/output error" 00:17:16.232 } 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2829636 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2829636 ']' 00:17:16.492 16:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2829636 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829636 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829636' 00:17:16.492 killing process with pid 2829636 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2829636 00:17:16.492 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.492 00:17:16.492 Latency(us) 00:17:16.492 [2024-11-04T15:28:43.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.492 [2024-11-04T15:28:43.316Z] =================================================================================================================== 00:17:16.492 [2024-11-04T15:28:43.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2829636 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.492 16:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qykfq14PcS 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qykfq14PcS 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qykfq14PcS 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qykfq14PcS 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829799 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829799 /var/tmp/bdevperf.sock 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2829799 ']' 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.492 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.492 [2024-11-04 16:28:43.311364] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:16.492 [2024-11-04 16:28:43.311417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829799 ] 00:17:16.751 [2024-11-04 16:28:43.372232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.751 [2024-11-04 16:28:43.411291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.751 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.751 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:16.751 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qykfq14PcS 00:17:17.009 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:17.268 [2024-11-04 16:28:43.856777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.268 [2024-11-04 16:28:43.868075] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.268 [2024-11-04 16:28:43.868094] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.268 [2024-11-04 16:28:43.868116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:17.268 [2024-11-04 16:28:43.869183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123f170 (107): Transport endpoint is not connected 00:17:17.268 [2024-11-04 16:28:43.870177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123f170 (9): Bad file descriptor 00:17:17.268 [2024-11-04 16:28:43.871179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:17.268 [2024-11-04 16:28:43.871189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.269 [2024-11-04 16:28:43.871196] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:17.269 [2024-11-04 16:28:43.871206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:17.269 request: 00:17:17.269 { 00:17:17.269 "name": "TLSTEST", 00:17:17.269 "trtype": "tcp", 00:17:17.269 "traddr": "10.0.0.2", 00:17:17.269 "adrfam": "ipv4", 00:17:17.269 "trsvcid": "4420", 00:17:17.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.269 "prchk_reftag": false, 00:17:17.269 "prchk_guard": false, 00:17:17.269 "hdgst": false, 00:17:17.269 "ddgst": false, 00:17:17.269 "psk": "key0", 00:17:17.269 "allow_unrecognized_csi": false, 00:17:17.269 "method": "bdev_nvme_attach_controller", 00:17:17.269 "req_id": 1 00:17:17.269 } 00:17:17.269 Got JSON-RPC error response 00:17:17.269 response: 00:17:17.269 { 00:17:17.269 "code": -5, 00:17:17.269 "message": "Input/output error" 00:17:17.269 } 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2829799 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2829799 ']' 00:17:17.269 16:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2829799 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829799 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829799' 00:17:17.269 killing process with pid 2829799 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2829799 00:17:17.269 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.269 00:17:17.269 Latency(us) 00:17:17.269 [2024-11-04T15:28:44.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.269 [2024-11-04T15:28:44.093Z] =================================================================================================================== 00:17:17.269 [2024-11-04T15:28:44.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.269 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2829799 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.269 16:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.269 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.527 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2829887 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.528 16:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2829887 /var/tmp/bdevperf.sock 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2829887 ']' 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.528 [2024-11-04 16:28:44.143336] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:17.528 [2024-11-04 16:28:44.143385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829887 ] 00:17:17.528 [2024-11-04 16:28:44.202159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.528 [2024-11-04 16:28:44.242129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:17.528 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:17.786 [2024-11-04 16:28:44.503480] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:17.786 [2024-11-04 16:28:44.503512] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:17.786 request: 00:17:17.786 { 00:17:17.786 "name": "key0", 00:17:17.786 "path": "", 00:17:17.786 "method": "keyring_file_add_key", 00:17:17.786 "req_id": 1 00:17:17.786 } 00:17:17.786 Got JSON-RPC error response 00:17:17.786 response: 00:17:17.786 { 00:17:17.786 "code": -1, 00:17:17.786 "message": "Operation not permitted" 00:17:17.786 } 00:17:17.786 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:18.045 [2024-11-04 16:28:44.704092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:17:18.045 [2024-11-04 16:28:44.704117] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:18.045 request: 00:17:18.045 { 00:17:18.045 "name": "TLSTEST", 00:17:18.045 "trtype": "tcp", 00:17:18.045 "traddr": "10.0.0.2", 00:17:18.045 "adrfam": "ipv4", 00:17:18.045 "trsvcid": "4420", 00:17:18.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.045 "prchk_reftag": false, 00:17:18.045 "prchk_guard": false, 00:17:18.045 "hdgst": false, 00:17:18.045 "ddgst": false, 00:17:18.045 "psk": "key0", 00:17:18.045 "allow_unrecognized_csi": false, 00:17:18.045 "method": "bdev_nvme_attach_controller", 00:17:18.045 "req_id": 1 00:17:18.045 } 00:17:18.045 Got JSON-RPC error response 00:17:18.045 response: 00:17:18.045 { 00:17:18.045 "code": -126, 00:17:18.045 "message": "Required key not available" 00:17:18.045 } 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2829887 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2829887 ']' 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2829887 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829887 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829887' 00:17:18.045 killing process with pid 2829887 
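The JSON-RPC `code` fields in the error responses above are negated Linux errno values rendered through strerror: `-1` for the rejected empty key path (EPERM), `-126` for the unloadable PSK (ENOKEY), and the earlier `-5` from the failed TLS connection (EIO). A quick sketch of that mapping (assumes a Linux/glibc strerror table, which is what this test rig runs):

```python
import os

# JSON-RPC "code" fields seen above are negated errno values; on Linux,
# os.strerror reproduces the corresponding "message" fields verbatim.
for code in (-1, -5, -126):
    print(f"{code}: {os.strerror(-code)}")
```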
00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2829887 00:17:18.045 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.045 00:17:18.045 Latency(us) 00:17:18.045 [2024-11-04T15:28:44.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.045 [2024-11-04T15:28:44.869Z] =================================================================================================================== 00:17:18.045 [2024-11-04T15:28:44.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.045 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2829887 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2825325 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2825325 ']' 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2825325 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825325 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825325' 00:17:18.305 killing process with pid 2825325 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2825325 00:17:18.305 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2825325 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.0e6TiEhu3u 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:18.564 16:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.0e6TiEhu3u 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2830133 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2830133 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2830133 ']' 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.564 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.564 [2024-11-04 16:28:45.238838] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
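The long-format key generated above by `format_interchange_psk`/`format_key` is the NVMe TLS PSK interchange form: the configured secret plus a CRC32 trailer, base64-encoded, with a one-byte hash identifier (01 = SHA-256, 02 = SHA-384). A minimal sketch of that derivation, assuming (as in the interchange format) a little-endian CRC32:

```python
import base64
import zlib

def format_interchange_psk(secret: bytes, hash_id: int) -> str:
    # Interchange form: "NVMeTLSkey-1:<hh>:<base64(secret || crc32_le(secret))>:"
    crc = zlib.crc32(secret).to_bytes(4, "little")
    payload = base64.b64encode(secret + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, payload)

# The 48-character secret and digest 2 (SHA-384) passed at tls.sh@160 above:
key_long = format_interchange_psk(
    b"00112233445566778899aabbccddeeff0011223344556677", 2
)
print(key_long)
```

The test then writes this string to a `mktemp` file and `chmod 0600`s it; as the later `chmod 0666` negative test in this suite shows, `keyring_file_add_key` rejects key files with looser permissions.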
00:17:18.564 [2024-11-04 16:28:45.238888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.564 [2024-11-04 16:28:45.305594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.564 [2024-11-04 16:28:45.345431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.564 [2024-11-04 16:28:45.345469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.564 [2024-11-04 16:28:45.345476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.564 [2024-11-04 16:28:45.345482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.564 [2024-11-04 16:28:45.345486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.564 [2024-11-04 16:28:45.346093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0e6TiEhu3u 00:17:18.823 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:18.823 [2024-11-04 16:28:45.639898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.081 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.081 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:19.339 [2024-11-04 16:28:46.020880] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.339 [2024-11-04 16:28:46.021092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:19.339 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:19.597 malloc0 00:17:19.597 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:19.597 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:19.855 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:20.113 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0e6TiEhu3u 00:17:20.113 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.113 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.113 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0e6TiEhu3u 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2830387 00:17:20.114 16:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2830387 /var/tmp/bdevperf.sock 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2830387 ']' 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.114 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.114 [2024-11-04 16:28:46.817020] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:20.114 [2024-11-04 16:28:46.817069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830387 ] 00:17:20.114 [2024-11-04 16:28:46.875769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.114 [2024-11-04 16:28:46.915983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.372 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.372 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:20.372 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:20.372 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:20.630 [2024-11-04 16:28:47.353633] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.630 TLSTESTn1 00:17:20.630 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:20.888 Running I/O for 10 seconds... 
00:17:22.757 5098.00 IOPS, 19.91 MiB/s [2024-11-04T15:28:50.960Z] 5372.50 IOPS, 20.99 MiB/s [2024-11-04T15:28:51.893Z] 5366.33 IOPS, 20.96 MiB/s [2024-11-04T15:28:52.828Z] 5394.75 IOPS, 21.07 MiB/s [2024-11-04T15:28:53.763Z] 5438.40 IOPS, 21.24 MiB/s [2024-11-04T15:28:54.697Z] 5415.17 IOPS, 21.15 MiB/s [2024-11-04T15:28:55.632Z] 5442.00 IOPS, 21.26 MiB/s [2024-11-04T15:28:56.568Z] 5463.25 IOPS, 21.34 MiB/s [2024-11-04T15:28:57.943Z] 5473.33 IOPS, 21.38 MiB/s [2024-11-04T15:28:57.943Z] 5474.10 IOPS, 21.38 MiB/s 00:17:31.119 Latency(us) 00:17:31.119 [2024-11-04T15:28:57.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.119 Verification LBA range: start 0x0 length 0x2000 00:17:31.119 TLSTESTn1 : 10.02 5478.27 21.40 0.00 0.00 23329.33 6553.60 57422.02 00:17:31.119 [2024-11-04T15:28:57.943Z] =================================================================================================================== 00:17:31.119 [2024-11-04T15:28:57.943Z] Total : 5478.27 21.40 0.00 0.00 23329.33 6553.60 57422.02 00:17:31.119 { 00:17:31.119 "results": [ 00:17:31.119 { 00:17:31.119 "job": "TLSTESTn1", 00:17:31.119 "core_mask": "0x4", 00:17:31.119 "workload": "verify", 00:17:31.119 "status": "finished", 00:17:31.119 "verify_range": { 00:17:31.119 "start": 0, 00:17:31.119 "length": 8192 00:17:31.119 }, 00:17:31.119 "queue_depth": 128, 00:17:31.119 "io_size": 4096, 00:17:31.119 "runtime": 10.015574, 00:17:31.119 "iops": 5478.26814519068, 00:17:31.119 "mibps": 21.399484942151094, 00:17:31.119 "io_failed": 0, 00:17:31.119 "io_timeout": 0, 00:17:31.119 "avg_latency_us": 23329.33274511642, 00:17:31.119 "min_latency_us": 6553.6, 00:17:31.119 "max_latency_us": 57422.01904761905 00:17:31.119 } 00:17:31.119 ], 00:17:31.119 "core_count": 1 00:17:31.119 } 00:17:31.119 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:17:31.119 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2830387 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2830387 ']' 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2830387 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830387 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830387' 00:17:31.120 killing process with pid 2830387 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2830387 00:17:31.120 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.120 00:17:31.120 Latency(us) 00:17:31.120 [2024-11-04T15:28:57.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.120 [2024-11-04T15:28:57.944Z] =================================================================================================================== 00:17:31.120 [2024-11-04T15:28:57.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2830387 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.0e6TiEhu3u 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0e6TiEhu3u 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0e6TiEhu3u 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0e6TiEhu3u 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0e6TiEhu3u 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2832221 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2832221 /var/tmp/bdevperf.sock 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2832221 ']' 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.120 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.120 [2024-11-04 16:28:57.848927] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:31.120 [2024-11-04 16:28:57.848980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832221 ] 00:17:31.120 [2024-11-04 16:28:57.906489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.120 [2024-11-04 16:28:57.942853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.378 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.378 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:31.378 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:31.378 [2024-11-04 16:28:58.199929] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0e6TiEhu3u': 0100666 00:17:31.378 [2024-11-04 16:28:58.199957] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:31.637 request: 00:17:31.637 { 00:17:31.637 "name": "key0", 00:17:31.637 "path": "/tmp/tmp.0e6TiEhu3u", 00:17:31.637 "method": "keyring_file_add_key", 00:17:31.637 "req_id": 1 00:17:31.637 } 00:17:31.637 Got JSON-RPC error response 00:17:31.637 response: 00:17:31.637 { 00:17:31.637 "code": -1, 00:17:31.637 "message": "Operation not permitted" 00:17:31.637 } 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:31.637 [2024-11-04 16:28:58.388492] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.637 [2024-11-04 16:28:58.388516] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:31.637 request: 00:17:31.637 { 00:17:31.637 "name": "TLSTEST", 00:17:31.637 "trtype": "tcp", 00:17:31.637 "traddr": "10.0.0.2", 00:17:31.637 "adrfam": "ipv4", 00:17:31.637 "trsvcid": "4420", 00:17:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.637 "prchk_reftag": false, 00:17:31.637 "prchk_guard": false, 00:17:31.637 "hdgst": false, 00:17:31.637 "ddgst": false, 00:17:31.637 "psk": "key0", 00:17:31.637 "allow_unrecognized_csi": false, 00:17:31.637 "method": "bdev_nvme_attach_controller", 00:17:31.637 "req_id": 1 00:17:31.637 } 00:17:31.637 Got JSON-RPC error response 00:17:31.637 response: 00:17:31.637 { 00:17:31.637 "code": -126, 00:17:31.637 "message": "Required key not available" 00:17:31.637 } 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2832221 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2832221 ']' 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2832221 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832221 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2832221' 00:17:31.637 killing process with pid 2832221 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2832221 00:17:31.637 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.637 00:17:31.637 Latency(us) 00:17:31.637 [2024-11-04T15:28:58.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.637 [2024-11-04T15:28:58.461Z] =================================================================================================================== 00:17:31.637 [2024-11-04T15:28:58.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.637 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2832221 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2830133 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2830133 ']' 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2830133 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830133 00:17:31.895 
16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830133' 00:17:31.895 killing process with pid 2830133 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2830133 00:17:31.895 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2830133 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2832412 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2832412 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2832412 ']' 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.154 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.155 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:17:32.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.155 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.155 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.155 [2024-11-04 16:28:58.867744] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:32.155 [2024-11-04 16:28:58.867792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.155 [2024-11-04 16:28:58.934863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.155 [2024-11-04 16:28:58.972044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.155 [2024-11-04 16:28:58.972079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.155 [2024-11-04 16:28:58.972086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.155 [2024-11-04 16:28:58.972092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.155 [2024-11-04 16:28:58.972097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:32.155 [2024-11-04 16:28:58.972679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0e6TiEhu3u 00:17:32.413 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:32.672 [2024-11-04 16:28:59.275172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.672 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.672 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.930 [2024-11-04 16:28:59.636079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.930 [2024-11-04 16:28:59.636297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.930 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.188 malloc0 00:17:33.188 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.446 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:33.446 [2024-11-04 16:29:00.213708] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0e6TiEhu3u': 0100666 00:17:33.446 [2024-11-04 16:29:00.213742] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:33.446 request: 00:17:33.446 { 00:17:33.446 "name": "key0", 00:17:33.446 "path": "/tmp/tmp.0e6TiEhu3u", 00:17:33.446 "method": "keyring_file_add_key", 00:17:33.446 "req_id": 1 
00:17:33.446 } 00:17:33.446 Got JSON-RPC error response 00:17:33.446 response: 00:17:33.446 { 00:17:33.446 "code": -1, 00:17:33.446 "message": "Operation not permitted" 00:17:33.446 } 00:17:33.446 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.705 [2024-11-04 16:29:00.410249] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:33.705 [2024-11-04 16:29:00.410286] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:33.705 request: 00:17:33.705 { 00:17:33.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.705 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.705 "psk": "key0", 00:17:33.705 "method": "nvmf_subsystem_add_host", 00:17:33.705 "req_id": 1 00:17:33.705 } 00:17:33.705 Got JSON-RPC error response 00:17:33.705 response: 00:17:33.705 { 00:17:33.705 "code": -32603, 00:17:33.705 "message": "Internal error" 00:17:33.705 } 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2832412 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2832412 ']' 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2832412 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:33.705 16:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832412 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832412' 00:17:33.705 killing process with pid 2832412 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2832412 00:17:33.705 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2832412 00:17:33.963 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.0e6TiEhu3u 00:17:33.963 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:33.963 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.963 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.963 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2832728 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2832728 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2832728 ']' 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.964 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 [2024-11-04 16:29:00.706734] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:33.964 [2024-11-04 16:29:00.706779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.964 [2024-11-04 16:29:00.773186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.222 [2024-11-04 16:29:00.814979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.222 [2024-11-04 16:29:00.815012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.222 [2024-11-04 16:29:00.815019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.222 [2024-11-04 16:29:00.815025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.222 [2024-11-04 16:29:00.815030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.222 [2024-11-04 16:29:00.815628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0e6TiEhu3u 00:17:34.222 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.481 [2024-11-04 16:29:01.111124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.481 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.739 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.739 [2024-11-04 16:29:01.480103] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.739 [2024-11-04 16:29:01.480295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:34.739 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.997 malloc0 00:17:34.997 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.255 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:35.255 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2832981 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2832981 /var/tmp/bdevperf.sock 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2832981 ']' 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:17:35.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.514 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.514 [2024-11-04 16:29:02.249599] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:35.514 [2024-11-04 16:29:02.249659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832981 ] 00:17:35.514 [2024-11-04 16:29:02.306949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.772 [2024-11-04 16:29:02.347747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.772 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.772 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:35.772 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:36.030 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:36.031 [2024-11-04 16:29:02.789814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.289 TLSTESTn1 00:17:36.289 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:36.548 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:36.548 "subsystems": [ 00:17:36.548 { 00:17:36.548 "subsystem": "keyring", 00:17:36.548 "config": [ 00:17:36.548 { 00:17:36.548 "method": "keyring_file_add_key", 00:17:36.548 "params": { 00:17:36.548 "name": "key0", 00:17:36.548 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:36.548 } 00:17:36.548 } 00:17:36.548 ] 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "subsystem": "iobuf", 00:17:36.548 "config": [ 00:17:36.548 { 00:17:36.548 "method": "iobuf_set_options", 00:17:36.548 "params": { 00:17:36.548 "small_pool_count": 8192, 00:17:36.548 "large_pool_count": 1024, 00:17:36.548 "small_bufsize": 8192, 00:17:36.548 "large_bufsize": 135168, 00:17:36.548 "enable_numa": false 00:17:36.548 } 00:17:36.548 } 00:17:36.548 ] 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "subsystem": "sock", 00:17:36.548 "config": [ 00:17:36.548 { 00:17:36.548 "method": "sock_set_default_impl", 00:17:36.548 "params": { 00:17:36.548 "impl_name": "posix" 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "method": "sock_impl_set_options", 00:17:36.548 "params": { 00:17:36.548 "impl_name": "ssl", 00:17:36.548 "recv_buf_size": 4096, 00:17:36.548 "send_buf_size": 4096, 00:17:36.548 "enable_recv_pipe": true, 00:17:36.548 "enable_quickack": false, 00:17:36.548 "enable_placement_id": 0, 00:17:36.548 "enable_zerocopy_send_server": true, 00:17:36.548 "enable_zerocopy_send_client": false, 00:17:36.548 "zerocopy_threshold": 0, 00:17:36.548 "tls_version": 0, 00:17:36.548 "enable_ktls": false 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "method": "sock_impl_set_options", 00:17:36.548 "params": { 00:17:36.548 "impl_name": "posix", 00:17:36.548 "recv_buf_size": 2097152, 00:17:36.548 "send_buf_size": 2097152, 00:17:36.548 "enable_recv_pipe": true, 00:17:36.548 "enable_quickack": false, 00:17:36.548 "enable_placement_id": 0, 
00:17:36.548 "enable_zerocopy_send_server": true, 00:17:36.548 "enable_zerocopy_send_client": false, 00:17:36.548 "zerocopy_threshold": 0, 00:17:36.548 "tls_version": 0, 00:17:36.548 "enable_ktls": false 00:17:36.548 } 00:17:36.548 } 00:17:36.548 ] 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "subsystem": "vmd", 00:17:36.548 "config": [] 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "subsystem": "accel", 00:17:36.548 "config": [ 00:17:36.548 { 00:17:36.548 "method": "accel_set_options", 00:17:36.548 "params": { 00:17:36.548 "small_cache_size": 128, 00:17:36.548 "large_cache_size": 16, 00:17:36.548 "task_count": 2048, 00:17:36.548 "sequence_count": 2048, 00:17:36.548 "buf_count": 2048 00:17:36.548 } 00:17:36.548 } 00:17:36.548 ] 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "subsystem": "bdev", 00:17:36.548 "config": [ 00:17:36.548 { 00:17:36.548 "method": "bdev_set_options", 00:17:36.548 "params": { 00:17:36.548 "bdev_io_pool_size": 65535, 00:17:36.548 "bdev_io_cache_size": 256, 00:17:36.548 "bdev_auto_examine": true, 00:17:36.548 "iobuf_small_cache_size": 128, 00:17:36.548 "iobuf_large_cache_size": 16 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "method": "bdev_raid_set_options", 00:17:36.548 "params": { 00:17:36.548 "process_window_size_kb": 1024, 00:17:36.548 "process_max_bandwidth_mb_sec": 0 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "method": "bdev_iscsi_set_options", 00:17:36.548 "params": { 00:17:36.548 "timeout_sec": 30 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "method": "bdev_nvme_set_options", 00:17:36.548 "params": { 00:17:36.548 "action_on_timeout": "none", 00:17:36.548 "timeout_us": 0, 00:17:36.548 "timeout_admin_us": 0, 00:17:36.549 "keep_alive_timeout_ms": 10000, 00:17:36.549 "arbitration_burst": 0, 00:17:36.549 "low_priority_weight": 0, 00:17:36.549 "medium_priority_weight": 0, 00:17:36.549 "high_priority_weight": 0, 00:17:36.549 "nvme_adminq_poll_period_us": 10000, 00:17:36.549 "nvme_ioq_poll_period_us": 0, 
00:17:36.549 "io_queue_requests": 0, 00:17:36.549 "delay_cmd_submit": true, 00:17:36.549 "transport_retry_count": 4, 00:17:36.549 "bdev_retry_count": 3, 00:17:36.549 "transport_ack_timeout": 0, 00:17:36.549 "ctrlr_loss_timeout_sec": 0, 00:17:36.549 "reconnect_delay_sec": 0, 00:17:36.549 "fast_io_fail_timeout_sec": 0, 00:17:36.549 "disable_auto_failback": false, 00:17:36.549 "generate_uuids": false, 00:17:36.549 "transport_tos": 0, 00:17:36.549 "nvme_error_stat": false, 00:17:36.549 "rdma_srq_size": 0, 00:17:36.549 "io_path_stat": false, 00:17:36.549 "allow_accel_sequence": false, 00:17:36.549 "rdma_max_cq_size": 0, 00:17:36.549 "rdma_cm_event_timeout_ms": 0, 00:17:36.549 "dhchap_digests": [ 00:17:36.549 "sha256", 00:17:36.549 "sha384", 00:17:36.549 "sha512" 00:17:36.549 ], 00:17:36.549 "dhchap_dhgroups": [ 00:17:36.549 "null", 00:17:36.549 "ffdhe2048", 00:17:36.549 "ffdhe3072", 00:17:36.549 "ffdhe4096", 00:17:36.549 "ffdhe6144", 00:17:36.549 "ffdhe8192" 00:17:36.549 ] 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "bdev_nvme_set_hotplug", 00:17:36.549 "params": { 00:17:36.549 "period_us": 100000, 00:17:36.549 "enable": false 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "bdev_malloc_create", 00:17:36.549 "params": { 00:17:36.549 "name": "malloc0", 00:17:36.549 "num_blocks": 8192, 00:17:36.549 "block_size": 4096, 00:17:36.549 "physical_block_size": 4096, 00:17:36.549 "uuid": "d6312892-d351-4271-af5c-5bc4793c700b", 00:17:36.549 "optimal_io_boundary": 0, 00:17:36.549 "md_size": 0, 00:17:36.549 "dif_type": 0, 00:17:36.549 "dif_is_head_of_md": false, 00:17:36.549 "dif_pi_format": 0 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "bdev_wait_for_examine" 00:17:36.549 } 00:17:36.549 ] 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "subsystem": "nbd", 00:17:36.549 "config": [] 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "subsystem": "scheduler", 00:17:36.549 "config": [ 00:17:36.549 { 00:17:36.549 "method": 
"framework_set_scheduler", 00:17:36.549 "params": { 00:17:36.549 "name": "static" 00:17:36.549 } 00:17:36.549 } 00:17:36.549 ] 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "subsystem": "nvmf", 00:17:36.549 "config": [ 00:17:36.549 { 00:17:36.549 "method": "nvmf_set_config", 00:17:36.549 "params": { 00:17:36.549 "discovery_filter": "match_any", 00:17:36.549 "admin_cmd_passthru": { 00:17:36.549 "identify_ctrlr": false 00:17:36.549 }, 00:17:36.549 "dhchap_digests": [ 00:17:36.549 "sha256", 00:17:36.549 "sha384", 00:17:36.549 "sha512" 00:17:36.549 ], 00:17:36.549 "dhchap_dhgroups": [ 00:17:36.549 "null", 00:17:36.549 "ffdhe2048", 00:17:36.549 "ffdhe3072", 00:17:36.549 "ffdhe4096", 00:17:36.549 "ffdhe6144", 00:17:36.549 "ffdhe8192" 00:17:36.549 ] 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "nvmf_set_max_subsystems", 00:17:36.549 "params": { 00:17:36.549 "max_subsystems": 1024 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "nvmf_set_crdt", 00:17:36.549 "params": { 00:17:36.549 "crdt1": 0, 00:17:36.549 "crdt2": 0, 00:17:36.549 "crdt3": 0 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "nvmf_create_transport", 00:17:36.549 "params": { 00:17:36.549 "trtype": "TCP", 00:17:36.549 "max_queue_depth": 128, 00:17:36.549 "max_io_qpairs_per_ctrlr": 127, 00:17:36.549 "in_capsule_data_size": 4096, 00:17:36.549 "max_io_size": 131072, 00:17:36.549 "io_unit_size": 131072, 00:17:36.549 "max_aq_depth": 128, 00:17:36.549 "num_shared_buffers": 511, 00:17:36.549 "buf_cache_size": 4294967295, 00:17:36.549 "dif_insert_or_strip": false, 00:17:36.549 "zcopy": false, 00:17:36.549 "c2h_success": false, 00:17:36.549 "sock_priority": 0, 00:17:36.549 "abort_timeout_sec": 1, 00:17:36.549 "ack_timeout": 0, 00:17:36.549 "data_wr_pool_size": 0 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "nvmf_create_subsystem", 00:17:36.549 "params": { 00:17:36.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.549 
"allow_any_host": false, 00:17:36.549 "serial_number": "SPDK00000000000001", 00:17:36.549 "model_number": "SPDK bdev Controller", 00:17:36.549 "max_namespaces": 10, 00:17:36.549 "min_cntlid": 1, 00:17:36.549 "max_cntlid": 65519, 00:17:36.549 "ana_reporting": false 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "method": "nvmf_subsystem_add_host", 00:17:36.549 "params": { 00:17:36.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.549 "host": "nqn.2016-06.io.spdk:host1", 00:17:36.549 "psk": "key0" 00:17:36.549 } 00:17:36.549 }, 00:17:36.549 { 00:17:36.550 "method": "nvmf_subsystem_add_ns", 00:17:36.550 "params": { 00:17:36.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.550 "namespace": { 00:17:36.550 "nsid": 1, 00:17:36.550 "bdev_name": "malloc0", 00:17:36.550 "nguid": "D6312892D3514271AF5C5BC4793C700B", 00:17:36.550 "uuid": "d6312892-d351-4271-af5c-5bc4793c700b", 00:17:36.550 "no_auto_visible": false 00:17:36.550 } 00:17:36.550 } 00:17:36.550 }, 00:17:36.550 { 00:17:36.550 "method": "nvmf_subsystem_add_listener", 00:17:36.550 "params": { 00:17:36.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.550 "listen_address": { 00:17:36.550 "trtype": "TCP", 00:17:36.550 "adrfam": "IPv4", 00:17:36.550 "traddr": "10.0.0.2", 00:17:36.550 "trsvcid": "4420" 00:17:36.550 }, 00:17:36.550 "secure_channel": true 00:17:36.550 } 00:17:36.550 } 00:17:36.550 ] 00:17:36.550 } 00:17:36.550 ] 00:17:36.550 }' 00:17:36.550 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:36.808 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:36.808 "subsystems": [ 00:17:36.808 { 00:17:36.808 "subsystem": "keyring", 00:17:36.808 "config": [ 00:17:36.808 { 00:17:36.808 "method": "keyring_file_add_key", 00:17:36.808 "params": { 00:17:36.808 "name": "key0", 00:17:36.808 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:36.809 } 
00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "iobuf", 00:17:36.809 "config": [ 00:17:36.809 { 00:17:36.809 "method": "iobuf_set_options", 00:17:36.809 "params": { 00:17:36.809 "small_pool_count": 8192, 00:17:36.809 "large_pool_count": 1024, 00:17:36.809 "small_bufsize": 8192, 00:17:36.809 "large_bufsize": 135168, 00:17:36.809 "enable_numa": false 00:17:36.809 } 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "sock", 00:17:36.809 "config": [ 00:17:36.809 { 00:17:36.809 "method": "sock_set_default_impl", 00:17:36.809 "params": { 00:17:36.809 "impl_name": "posix" 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "sock_impl_set_options", 00:17:36.809 "params": { 00:17:36.809 "impl_name": "ssl", 00:17:36.809 "recv_buf_size": 4096, 00:17:36.809 "send_buf_size": 4096, 00:17:36.809 "enable_recv_pipe": true, 00:17:36.809 "enable_quickack": false, 00:17:36.809 "enable_placement_id": 0, 00:17:36.809 "enable_zerocopy_send_server": true, 00:17:36.809 "enable_zerocopy_send_client": false, 00:17:36.809 "zerocopy_threshold": 0, 00:17:36.809 "tls_version": 0, 00:17:36.809 "enable_ktls": false 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "sock_impl_set_options", 00:17:36.809 "params": { 00:17:36.809 "impl_name": "posix", 00:17:36.809 "recv_buf_size": 2097152, 00:17:36.809 "send_buf_size": 2097152, 00:17:36.809 "enable_recv_pipe": true, 00:17:36.809 "enable_quickack": false, 00:17:36.809 "enable_placement_id": 0, 00:17:36.809 "enable_zerocopy_send_server": true, 00:17:36.809 "enable_zerocopy_send_client": false, 00:17:36.809 "zerocopy_threshold": 0, 00:17:36.809 "tls_version": 0, 00:17:36.809 "enable_ktls": false 00:17:36.809 } 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "vmd", 00:17:36.809 "config": [] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "accel", 00:17:36.809 "config": [ 00:17:36.809 { 00:17:36.809 
"method": "accel_set_options", 00:17:36.809 "params": { 00:17:36.809 "small_cache_size": 128, 00:17:36.809 "large_cache_size": 16, 00:17:36.809 "task_count": 2048, 00:17:36.809 "sequence_count": 2048, 00:17:36.809 "buf_count": 2048 00:17:36.809 } 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "bdev", 00:17:36.809 "config": [ 00:17:36.809 { 00:17:36.809 "method": "bdev_set_options", 00:17:36.809 "params": { 00:17:36.809 "bdev_io_pool_size": 65535, 00:17:36.809 "bdev_io_cache_size": 256, 00:17:36.809 "bdev_auto_examine": true, 00:17:36.809 "iobuf_small_cache_size": 128, 00:17:36.809 "iobuf_large_cache_size": 16 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_raid_set_options", 00:17:36.809 "params": { 00:17:36.809 "process_window_size_kb": 1024, 00:17:36.809 "process_max_bandwidth_mb_sec": 0 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_iscsi_set_options", 00:17:36.809 "params": { 00:17:36.809 "timeout_sec": 30 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_nvme_set_options", 00:17:36.809 "params": { 00:17:36.809 "action_on_timeout": "none", 00:17:36.809 "timeout_us": 0, 00:17:36.809 "timeout_admin_us": 0, 00:17:36.809 "keep_alive_timeout_ms": 10000, 00:17:36.809 "arbitration_burst": 0, 00:17:36.809 "low_priority_weight": 0, 00:17:36.809 "medium_priority_weight": 0, 00:17:36.809 "high_priority_weight": 0, 00:17:36.809 "nvme_adminq_poll_period_us": 10000, 00:17:36.809 "nvme_ioq_poll_period_us": 0, 00:17:36.809 "io_queue_requests": 512, 00:17:36.809 "delay_cmd_submit": true, 00:17:36.809 "transport_retry_count": 4, 00:17:36.809 "bdev_retry_count": 3, 00:17:36.809 "transport_ack_timeout": 0, 00:17:36.809 "ctrlr_loss_timeout_sec": 0, 00:17:36.809 "reconnect_delay_sec": 0, 00:17:36.809 "fast_io_fail_timeout_sec": 0, 00:17:36.809 "disable_auto_failback": false, 00:17:36.809 "generate_uuids": false, 00:17:36.809 "transport_tos": 0, 00:17:36.809 
"nvme_error_stat": false, 00:17:36.809 "rdma_srq_size": 0, 00:17:36.809 "io_path_stat": false, 00:17:36.809 "allow_accel_sequence": false, 00:17:36.809 "rdma_max_cq_size": 0, 00:17:36.809 "rdma_cm_event_timeout_ms": 0, 00:17:36.809 "dhchap_digests": [ 00:17:36.809 "sha256", 00:17:36.809 "sha384", 00:17:36.809 "sha512" 00:17:36.809 ], 00:17:36.809 "dhchap_dhgroups": [ 00:17:36.809 "null", 00:17:36.809 "ffdhe2048", 00:17:36.809 "ffdhe3072", 00:17:36.809 "ffdhe4096", 00:17:36.809 "ffdhe6144", 00:17:36.809 "ffdhe8192" 00:17:36.809 ] 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_nvme_attach_controller", 00:17:36.809 "params": { 00:17:36.809 "name": "TLSTEST", 00:17:36.809 "trtype": "TCP", 00:17:36.809 "adrfam": "IPv4", 00:17:36.809 "traddr": "10.0.0.2", 00:17:36.809 "trsvcid": "4420", 00:17:36.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.809 "prchk_reftag": false, 00:17:36.809 "prchk_guard": false, 00:17:36.809 "ctrlr_loss_timeout_sec": 0, 00:17:36.809 "reconnect_delay_sec": 0, 00:17:36.809 "fast_io_fail_timeout_sec": 0, 00:17:36.809 "psk": "key0", 00:17:36.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.809 "hdgst": false, 00:17:36.809 "ddgst": false, 00:17:36.809 "multipath": "multipath" 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_nvme_set_hotplug", 00:17:36.809 "params": { 00:17:36.809 "period_us": 100000, 00:17:36.809 "enable": false 00:17:36.809 } 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "method": "bdev_wait_for_examine" 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "subsystem": "nbd", 00:17:36.809 "config": [] 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }' 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2832981 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2832981 ']' 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2832981 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832981 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832981' 00:17:36.809 killing process with pid 2832981 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2832981 00:17:36.809 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.809 00:17:36.809 Latency(us) 00:17:36.809 [2024-11-04T15:29:03.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.809 [2024-11-04T15:29:03.633Z] =================================================================================================================== 00:17:36.809 [2024-11-04T15:29:03.633Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2832981 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2832728 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2832728 ']' 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2832728 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:36.809 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.810 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832728 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832728' 00:17:37.069 killing process with pid 2832728 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2832728 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2832728 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.069 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:37.069 "subsystems": [ 00:17:37.069 { 00:17:37.069 "subsystem": "keyring", 00:17:37.069 "config": [ 00:17:37.069 { 00:17:37.069 "method": "keyring_file_add_key", 00:17:37.069 "params": { 00:17:37.069 "name": "key0", 00:17:37.069 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:37.069 } 00:17:37.069 } 00:17:37.069 ] 00:17:37.069 }, 00:17:37.069 { 00:17:37.069 "subsystem": "iobuf", 00:17:37.069 "config": [ 00:17:37.069 { 00:17:37.069 "method": "iobuf_set_options", 00:17:37.069 "params": { 00:17:37.069 "small_pool_count": 8192, 00:17:37.069 "large_pool_count": 1024, 00:17:37.069 "small_bufsize": 8192, 00:17:37.069 "large_bufsize": 135168, 00:17:37.069 "enable_numa": false 00:17:37.069 } 00:17:37.069 } 00:17:37.069 ] 00:17:37.069 }, 
00:17:37.069 { 00:17:37.069 "subsystem": "sock", 00:17:37.069 "config": [ 00:17:37.069 { 00:17:37.069 "method": "sock_set_default_impl", 00:17:37.069 "params": { 00:17:37.069 "impl_name": "posix" 00:17:37.069 } 00:17:37.069 }, 00:17:37.069 { 00:17:37.069 "method": "sock_impl_set_options", 00:17:37.069 "params": { 00:17:37.069 "impl_name": "ssl", 00:17:37.069 "recv_buf_size": 4096, 00:17:37.069 "send_buf_size": 4096, 00:17:37.069 "enable_recv_pipe": true, 00:17:37.069 "enable_quickack": false, 00:17:37.069 "enable_placement_id": 0, 00:17:37.069 "enable_zerocopy_send_server": true, 00:17:37.069 "enable_zerocopy_send_client": false, 00:17:37.069 "zerocopy_threshold": 0, 00:17:37.069 "tls_version": 0, 00:17:37.069 "enable_ktls": false 00:17:37.069 } 00:17:37.069 }, 00:17:37.069 { 00:17:37.069 "method": "sock_impl_set_options", 00:17:37.069 "params": { 00:17:37.069 "impl_name": "posix", 00:17:37.069 "recv_buf_size": 2097152, 00:17:37.069 "send_buf_size": 2097152, 00:17:37.069 "enable_recv_pipe": true, 00:17:37.069 "enable_quickack": false, 00:17:37.069 "enable_placement_id": 0, 00:17:37.069 "enable_zerocopy_send_server": true, 00:17:37.069 "enable_zerocopy_send_client": false, 00:17:37.069 "zerocopy_threshold": 0, 00:17:37.069 "tls_version": 0, 00:17:37.069 "enable_ktls": false 00:17:37.069 } 00:17:37.069 } 00:17:37.069 ] 00:17:37.069 }, 00:17:37.069 { 00:17:37.069 "subsystem": "vmd", 00:17:37.069 "config": [] 00:17:37.069 }, 00:17:37.069 { 00:17:37.069 "subsystem": "accel", 00:17:37.069 "config": [ 00:17:37.069 { 00:17:37.069 "method": "accel_set_options", 00:17:37.069 "params": { 00:17:37.069 "small_cache_size": 128, 00:17:37.069 "large_cache_size": 16, 00:17:37.069 "task_count": 2048, 00:17:37.069 "sequence_count": 2048, 00:17:37.070 "buf_count": 2048 00:17:37.070 } 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "subsystem": "bdev", 00:17:37.070 "config": [ 00:17:37.070 { 00:17:37.070 "method": "bdev_set_options", 00:17:37.070 "params": { 
00:17:37.070 "bdev_io_pool_size": 65535, 00:17:37.070 "bdev_io_cache_size": 256, 00:17:37.070 "bdev_auto_examine": true, 00:17:37.070 "iobuf_small_cache_size": 128, 00:17:37.070 "iobuf_large_cache_size": 16 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_raid_set_options", 00:17:37.070 "params": { 00:17:37.070 "process_window_size_kb": 1024, 00:17:37.070 "process_max_bandwidth_mb_sec": 0 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_iscsi_set_options", 00:17:37.070 "params": { 00:17:37.070 "timeout_sec": 30 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_nvme_set_options", 00:17:37.070 "params": { 00:17:37.070 "action_on_timeout": "none", 00:17:37.070 "timeout_us": 0, 00:17:37.070 "timeout_admin_us": 0, 00:17:37.070 "keep_alive_timeout_ms": 10000, 00:17:37.070 "arbitration_burst": 0, 00:17:37.070 "low_priority_weight": 0, 00:17:37.070 "medium_priority_weight": 0, 00:17:37.070 "high_priority_weight": 0, 00:17:37.070 "nvme_adminq_poll_period_us": 10000, 00:17:37.070 "nvme_ioq_poll_period_us": 0, 00:17:37.070 "io_queue_requests": 0, 00:17:37.070 "delay_cmd_submit": true, 00:17:37.070 "transport_retry_count": 4, 00:17:37.070 "bdev_retry_count": 3, 00:17:37.070 "transport_ack_timeout": 0, 00:17:37.070 "ctrlr_loss_timeout_sec": 0, 00:17:37.070 "reconnect_delay_sec": 0, 00:17:37.070 "fast_io_fail_timeout_sec": 0, 00:17:37.070 "disable_auto_failback": false, 00:17:37.070 "generate_uuids": false, 00:17:37.070 "transport_tos": 0, 00:17:37.070 "nvme_error_stat": false, 00:17:37.070 "rdma_srq_size": 0, 00:17:37.070 "io_path_stat": false, 00:17:37.070 "allow_accel_sequence": false, 00:17:37.070 "rdma_max_cq_size": 0, 00:17:37.070 "rdma_cm_event_timeout_ms": 0, 00:17:37.070 "dhchap_digests": [ 00:17:37.070 "sha256", 00:17:37.070 "sha384", 00:17:37.070 "sha512" 00:17:37.070 ], 00:17:37.070 "dhchap_dhgroups": [ 00:17:37.070 "null", 00:17:37.070 "ffdhe2048", 00:17:37.070 "ffdhe3072", 00:17:37.070 
"ffdhe4096", 00:17:37.070 "ffdhe6144", 00:17:37.070 "ffdhe8192" 00:17:37.070 ] 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_nvme_set_hotplug", 00:17:37.070 "params": { 00:17:37.070 "period_us": 100000, 00:17:37.070 "enable": false 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_malloc_create", 00:17:37.070 "params": { 00:17:37.070 "name": "malloc0", 00:17:37.070 "num_blocks": 8192, 00:17:37.070 "block_size": 4096, 00:17:37.070 "physical_block_size": 4096, 00:17:37.070 "uuid": "d6312892-d351-4271-af5c-5bc4793c700b", 00:17:37.070 "optimal_io_boundary": 0, 00:17:37.070 "md_size": 0, 00:17:37.070 "dif_type": 0, 00:17:37.070 "dif_is_head_of_md": false, 00:17:37.070 "dif_pi_format": 0 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "bdev_wait_for_examine" 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "subsystem": "nbd", 00:17:37.070 "config": [] 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "subsystem": "scheduler", 00:17:37.070 "config": [ 00:17:37.070 { 00:17:37.070 "method": "framework_set_scheduler", 00:17:37.070 "params": { 00:17:37.070 "name": "static" 00:17:37.070 } 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "subsystem": "nvmf", 00:17:37.070 "config": [ 00:17:37.070 { 00:17:37.070 "method": "nvmf_set_config", 00:17:37.070 "params": { 00:17:37.070 "discovery_filter": "match_any", 00:17:37.070 "admin_cmd_passthru": { 00:17:37.070 "identify_ctrlr": false 00:17:37.070 }, 00:17:37.070 "dhchap_digests": [ 00:17:37.070 "sha256", 00:17:37.070 "sha384", 00:17:37.070 "sha512" 00:17:37.070 ], 00:17:37.070 "dhchap_dhgroups": [ 00:17:37.070 "null", 00:17:37.070 "ffdhe2048", 00:17:37.070 "ffdhe3072", 00:17:37.070 "ffdhe4096", 00:17:37.070 "ffdhe6144", 00:17:37.070 "ffdhe8192" 00:17:37.070 ] 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_set_max_subsystems", 00:17:37.070 "params": { 00:17:37.070 "max_subsystems": 1024 
00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_set_crdt", 00:17:37.070 "params": { 00:17:37.070 "crdt1": 0, 00:17:37.070 "crdt2": 0, 00:17:37.070 "crdt3": 0 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_create_transport", 00:17:37.070 "params": { 00:17:37.070 "trtype": "TCP", 00:17:37.070 "max_queue_depth": 128, 00:17:37.070 "max_io_qpairs_per_ctrlr": 127, 00:17:37.070 "in_capsule_data_size": 4096, 00:17:37.070 "max_io_size": 131072, 00:17:37.070 "io_unit_size": 131072, 00:17:37.070 "max_aq_depth": 128, 00:17:37.070 "num_shared_buffers": 511, 00:17:37.070 "buf_cache_size": 4294967295, 00:17:37.070 "dif_insert_or_strip": false, 00:17:37.070 "zcopy": false, 00:17:37.070 "c2h_success": false, 00:17:37.070 "sock_priority": 0, 00:17:37.070 "abort_timeout_sec": 1, 00:17:37.070 "ack_timeout": 0, 00:17:37.070 "data_wr_pool_size": 0 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_create_subsystem", 00:17:37.070 "params": { 00:17:37.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.070 "allow_any_host": false, 00:17:37.070 "serial_number": "SPDK00000000000001", 00:17:37.070 "model_number": "SPDK bdev Controller", 00:17:37.070 "max_namespaces": 10, 00:17:37.070 "min_cntlid": 1, 00:17:37.070 "max_cntlid": 65519, 00:17:37.070 "ana_reporting": false 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_subsystem_add_host", 00:17:37.070 "params": { 00:17:37.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.070 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.070 "psk": "key0" 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_subsystem_add_ns", 00:17:37.070 "params": { 00:17:37.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.070 "namespace": { 00:17:37.070 "nsid": 1, 00:17:37.070 "bdev_name": "malloc0", 00:17:37.070 "nguid": "D6312892D3514271AF5C5BC4793C700B", 00:17:37.070 "uuid": "d6312892-d351-4271-af5c-5bc4793c700b", 00:17:37.070 "no_auto_visible": 
false 00:17:37.070 } 00:17:37.070 } 00:17:37.070 }, 00:17:37.070 { 00:17:37.070 "method": "nvmf_subsystem_add_listener", 00:17:37.070 "params": { 00:17:37.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.070 "listen_address": { 00:17:37.070 "trtype": "TCP", 00:17:37.070 "adrfam": "IPv4", 00:17:37.070 "traddr": "10.0.0.2", 00:17:37.070 "trsvcid": "4420" 00:17:37.070 }, 00:17:37.070 "secure_channel": true 00:17:37.070 } 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 } 00:17:37.070 ] 00:17:37.070 }' 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2833237 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2833237 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2833237 ']' 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.070 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.071 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.071 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.071 [2024-11-04 16:29:03.864564] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:37.071 [2024-11-04 16:29:03.864621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.328 [2024-11-04 16:29:03.930474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.328 [2024-11-04 16:29:03.966260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.328 [2024-11-04 16:29:03.966294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.328 [2024-11-04 16:29:03.966301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.328 [2024-11-04 16:29:03.966307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.328 [2024-11-04 16:29:03.966312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.328 [2024-11-04 16:29:03.966919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.587 [2024-11-04 16:29:04.179557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.587 [2024-11-04 16:29:04.211587] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.587 [2024-11-04 16:29:04.211807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2833482 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2833482 /var/tmp/bdevperf.sock 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2833482 ']' 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.154 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:38.154 "subsystems": [ 00:17:38.154 { 00:17:38.154 "subsystem": "keyring", 00:17:38.154 "config": [ 00:17:38.154 { 00:17:38.154 "method": "keyring_file_add_key", 00:17:38.154 "params": { 00:17:38.154 "name": "key0", 00:17:38.154 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:38.154 } 00:17:38.154 } 00:17:38.154 ] 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "subsystem": "iobuf", 00:17:38.154 "config": [ 00:17:38.154 { 00:17:38.154 "method": "iobuf_set_options", 00:17:38.154 "params": { 00:17:38.154 "small_pool_count": 8192, 00:17:38.154 "large_pool_count": 1024, 00:17:38.154 "small_bufsize": 8192, 00:17:38.154 "large_bufsize": 135168, 00:17:38.154 "enable_numa": false 00:17:38.154 } 00:17:38.154 } 00:17:38.154 ] 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "subsystem": "sock", 00:17:38.154 "config": [ 00:17:38.154 { 00:17:38.154 "method": "sock_set_default_impl", 00:17:38.154 "params": { 00:17:38.154 "impl_name": "posix" 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "sock_impl_set_options", 00:17:38.154 "params": { 00:17:38.154 "impl_name": "ssl", 00:17:38.154 "recv_buf_size": 4096, 00:17:38.154 "send_buf_size": 4096, 00:17:38.154 "enable_recv_pipe": true, 00:17:38.154 "enable_quickack": false, 00:17:38.154 "enable_placement_id": 0, 00:17:38.154 "enable_zerocopy_send_server": true, 00:17:38.154 "enable_zerocopy_send_client": false, 00:17:38.154 "zerocopy_threshold": 0, 00:17:38.154 "tls_version": 0, 00:17:38.154 "enable_ktls": false 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "sock_impl_set_options", 00:17:38.154 "params": { 
00:17:38.154 "impl_name": "posix", 00:17:38.154 "recv_buf_size": 2097152, 00:17:38.154 "send_buf_size": 2097152, 00:17:38.154 "enable_recv_pipe": true, 00:17:38.154 "enable_quickack": false, 00:17:38.154 "enable_placement_id": 0, 00:17:38.154 "enable_zerocopy_send_server": true, 00:17:38.154 "enable_zerocopy_send_client": false, 00:17:38.154 "zerocopy_threshold": 0, 00:17:38.154 "tls_version": 0, 00:17:38.154 "enable_ktls": false 00:17:38.154 } 00:17:38.154 } 00:17:38.154 ] 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "subsystem": "vmd", 00:17:38.154 "config": [] 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "subsystem": "accel", 00:17:38.154 "config": [ 00:17:38.154 { 00:17:38.154 "method": "accel_set_options", 00:17:38.154 "params": { 00:17:38.154 "small_cache_size": 128, 00:17:38.154 "large_cache_size": 16, 00:17:38.154 "task_count": 2048, 00:17:38.154 "sequence_count": 2048, 00:17:38.154 "buf_count": 2048 00:17:38.154 } 00:17:38.154 } 00:17:38.154 ] 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "subsystem": "bdev", 00:17:38.154 "config": [ 00:17:38.154 { 00:17:38.154 "method": "bdev_set_options", 00:17:38.154 "params": { 00:17:38.154 "bdev_io_pool_size": 65535, 00:17:38.154 "bdev_io_cache_size": 256, 00:17:38.154 "bdev_auto_examine": true, 00:17:38.154 "iobuf_small_cache_size": 128, 00:17:38.154 "iobuf_large_cache_size": 16 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "bdev_raid_set_options", 00:17:38.154 "params": { 00:17:38.154 "process_window_size_kb": 1024, 00:17:38.154 "process_max_bandwidth_mb_sec": 0 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "bdev_iscsi_set_options", 00:17:38.154 "params": { 00:17:38.154 "timeout_sec": 30 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "bdev_nvme_set_options", 00:17:38.154 "params": { 00:17:38.154 "action_on_timeout": "none", 00:17:38.154 "timeout_us": 0, 00:17:38.154 "timeout_admin_us": 0, 00:17:38.154 "keep_alive_timeout_ms": 10000, 00:17:38.154 
"arbitration_burst": 0, 00:17:38.154 "low_priority_weight": 0, 00:17:38.154 "medium_priority_weight": 0, 00:17:38.154 "high_priority_weight": 0, 00:17:38.154 "nvme_adminq_poll_period_us": 10000, 00:17:38.154 "nvme_ioq_poll_period_us": 0, 00:17:38.154 "io_queue_requests": 512, 00:17:38.154 "delay_cmd_submit": true, 00:17:38.154 "transport_retry_count": 4, 00:17:38.154 "bdev_retry_count": 3, 00:17:38.154 "transport_ack_timeout": 0, 00:17:38.154 "ctrlr_loss_timeout_sec": 0, 00:17:38.154 "reconnect_delay_sec": 0, 00:17:38.154 "fast_io_fail_timeout_sec": 0, 00:17:38.154 "disable_auto_failback": false, 00:17:38.154 "generate_uuids": false, 00:17:38.154 "transport_tos": 0, 00:17:38.154 "nvme_error_stat": false, 00:17:38.154 "rdma_srq_size": 0, 00:17:38.154 "io_path_stat": false, 00:17:38.154 "allow_accel_sequence": false, 00:17:38.154 "rdma_max_cq_size": 0, 00:17:38.154 "rdma_cm_event_timeout_ms": 0, 00:17:38.154 "dhchap_digests": [ 00:17:38.154 "sha256", 00:17:38.154 "sha384", 00:17:38.154 "sha512" 00:17:38.154 ], 00:17:38.154 "dhchap_dhgroups": [ 00:17:38.154 "null", 00:17:38.154 "ffdhe2048", 00:17:38.154 "ffdhe3072", 00:17:38.154 "ffdhe4096", 00:17:38.154 "ffdhe6144", 00:17:38.154 "ffdhe8192" 00:17:38.154 ] 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 "method": "bdev_nvme_attach_controller", 00:17:38.154 "params": { 00:17:38.154 "name": "TLSTEST", 00:17:38.154 "trtype": "TCP", 00:17:38.154 "adrfam": "IPv4", 00:17:38.154 "traddr": "10.0.0.2", 00:17:38.154 "trsvcid": "4420", 00:17:38.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.154 "prchk_reftag": false, 00:17:38.154 "prchk_guard": false, 00:17:38.154 "ctrlr_loss_timeout_sec": 0, 00:17:38.154 "reconnect_delay_sec": 0, 00:17:38.154 "fast_io_fail_timeout_sec": 0, 00:17:38.154 "psk": "key0", 00:17:38.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.154 "hdgst": false, 00:17:38.154 "ddgst": false, 00:17:38.154 "multipath": "multipath" 00:17:38.154 } 00:17:38.154 }, 00:17:38.154 { 00:17:38.154 
"method": "bdev_nvme_set_hotplug", 00:17:38.154 "params": { 00:17:38.154 "period_us": 100000, 00:17:38.155 "enable": false 00:17:38.155 } 00:17:38.155 }, 00:17:38.155 { 00:17:38.155 "method": "bdev_wait_for_examine" 00:17:38.155 } 00:17:38.155 ] 00:17:38.155 }, 00:17:38.155 { 00:17:38.155 "subsystem": "nbd", 00:17:38.155 "config": [] 00:17:38.155 } 00:17:38.155 ] 00:17:38.155 }' 00:17:38.155 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.155 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.155 [2024-11-04 16:29:04.780872] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:38.155 [2024-11-04 16:29:04.780923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833482 ] 00:17:38.155 [2024-11-04 16:29:04.838428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.155 [2024-11-04 16:29:04.878316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.413 [2024-11-04 16:29:05.028922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.980 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.980 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.980 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:38.980 Running I/O for 10 seconds... 
00:17:41.360 5143.00 IOPS, 20.09 MiB/s [2024-11-04T15:29:08.775Z] 5350.00 IOPS, 20.90 MiB/s [2024-11-04T15:29:10.149Z] 5396.00 IOPS, 21.08 MiB/s [2024-11-04T15:29:11.082Z] 5460.50 IOPS, 21.33 MiB/s [2024-11-04T15:29:12.016Z] 5478.40 IOPS, 21.40 MiB/s [2024-11-04T15:29:12.951Z] 5500.00 IOPS, 21.48 MiB/s [2024-11-04T15:29:13.885Z] 5524.57 IOPS, 21.58 MiB/s [2024-11-04T15:29:14.820Z] 5542.88 IOPS, 21.65 MiB/s [2024-11-04T15:29:15.758Z] 5531.11 IOPS, 21.61 MiB/s [2024-11-04T15:29:15.758Z] 5542.60 IOPS, 21.65 MiB/s 00:17:48.934 Latency(us) 00:17:48.934 [2024-11-04T15:29:15.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.934 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.934 Verification LBA range: start 0x0 length 0x2000 00:17:48.934 TLSTESTn1 : 10.01 5547.04 21.67 0.00 0.00 23039.26 4556.31 60168.29 00:17:48.934 [2024-11-04T15:29:15.758Z] =================================================================================================================== 00:17:48.934 [2024-11-04T15:29:15.758Z] Total : 5547.04 21.67 0.00 0.00 23039.26 4556.31 60168.29 00:17:48.934 { 00:17:48.934 "results": [ 00:17:48.934 { 00:17:48.934 "job": "TLSTESTn1", 00:17:48.934 "core_mask": "0x4", 00:17:48.934 "workload": "verify", 00:17:48.934 "status": "finished", 00:17:48.934 "verify_range": { 00:17:48.934 "start": 0, 00:17:48.934 "length": 8192 00:17:48.934 }, 00:17:48.934 "queue_depth": 128, 00:17:48.934 "io_size": 4096, 00:17:48.934 "runtime": 10.014706, 00:17:48.934 "iops": 5547.04251927116, 00:17:48.934 "mibps": 21.66813484090297, 00:17:48.934 "io_failed": 0, 00:17:48.934 "io_timeout": 0, 00:17:48.934 "avg_latency_us": 23039.255931533906, 00:17:48.934 "min_latency_us": 4556.312380952381, 00:17:48.934 "max_latency_us": 60168.28952380952 00:17:48.934 } 00:17:48.934 ], 00:17:48.934 "core_count": 1 00:17:48.934 } 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2833482 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2833482 ']' 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2833482 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833482 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833482' 00:17:49.192 killing process with pid 2833482 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2833482 00:17:49.192 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.192 00:17:49.192 Latency(us) 00:17:49.192 [2024-11-04T15:29:16.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.192 [2024-11-04T15:29:16.016Z] =================================================================================================================== 00:17:49.192 [2024-11-04T15:29:16.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2833482 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2833237 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2833237 ']' 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2833237 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.192 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833237 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833237' 00:17:49.451 killing process with pid 2833237 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2833237 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2833237 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.451 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2835328 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2835328 00:17:49.452 
16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2835328 ']' 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.452 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.452 [2024-11-04 16:29:16.243311] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:49.452 [2024-11-04 16:29:16.243359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.710 [2024-11-04 16:29:16.310017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.710 [2024-11-04 16:29:16.346105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.710 [2024-11-04 16:29:16.346140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.710 [2024-11-04 16:29:16.346147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.710 [2024-11-04 16:29:16.346156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:49.710 [2024-11-04 16:29:16.346160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.710 [2024-11-04 16:29:16.346744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.0e6TiEhu3u 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0e6TiEhu3u 00:17:49.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.968 [2024-11-04 16:29:16.637163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.968 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.226 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.226 [2024-11-04 16:29:17.026156] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:17:50.226 [2024-11-04 16:29:17.026356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.226 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.484 malloc0 00:17:50.484 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.742 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2835589 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2835589 /var/tmp/bdevperf.sock 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2835589 ']' 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.000 
16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.000 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.258 [2024-11-04 16:29:17.836223] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:51.258 [2024-11-04 16:29:17.836275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835589 ] 00:17:51.258 [2024-11-04 16:29:17.899434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.258 [2024-11-04 16:29:17.939984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.258 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.258 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:51.258 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:51.516 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:51.774 [2024-11-04 16:29:18.394733] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:17:51.774 nvme0n1 00:17:51.774 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:51.774 Running I/O for 1 seconds... 00:17:53.148 5323.00 IOPS, 20.79 MiB/s 00:17:53.148 Latency(us) 00:17:53.148 [2024-11-04T15:29:19.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.148 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:53.148 Verification LBA range: start 0x0 length 0x2000 00:17:53.148 nvme0n1 : 1.01 5375.51 21.00 0.00 0.00 23639.43 5242.88 29584.82 00:17:53.148 [2024-11-04T15:29:19.972Z] =================================================================================================================== 00:17:53.148 [2024-11-04T15:29:19.972Z] Total : 5375.51 21.00 0.00 0.00 23639.43 5242.88 29584.82 00:17:53.148 { 00:17:53.148 "results": [ 00:17:53.148 { 00:17:53.148 "job": "nvme0n1", 00:17:53.148 "core_mask": "0x2", 00:17:53.148 "workload": "verify", 00:17:53.148 "status": "finished", 00:17:53.148 "verify_range": { 00:17:53.148 "start": 0, 00:17:53.148 "length": 8192 00:17:53.148 }, 00:17:53.148 "queue_depth": 128, 00:17:53.148 "io_size": 4096, 00:17:53.148 "runtime": 1.014229, 00:17:53.148 "iops": 5375.511842000179, 00:17:53.148 "mibps": 20.9980931328132, 00:17:53.148 "io_failed": 0, 00:17:53.148 "io_timeout": 0, 00:17:53.148 "avg_latency_us": 23639.43370855606, 00:17:53.148 "min_latency_us": 5242.88, 00:17:53.148 "max_latency_us": 29584.822857142855 00:17:53.148 } 00:17:53.148 ], 00:17:53.148 "core_count": 1 00:17:53.148 } 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2835589 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2835589 ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2835589 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835589 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835589' 00:17:53.148 killing process with pid 2835589 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2835589 00:17:53.148 Received shutdown signal, test time was about 1.000000 seconds 00:17:53.148 00:17:53.148 Latency(us) 00:17:53.148 [2024-11-04T15:29:19.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.148 [2024-11-04T15:29:19.972Z] =================================================================================================================== 00:17:53.148 [2024-11-04T15:29:19.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2835589 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2835328 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2835328 ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2835328 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835328 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835328' 00:17:53.148 killing process with pid 2835328 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2835328 00:17:53.148 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2835328 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2836054 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2836054 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2836054 ']' 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.407 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.407 [2024-11-04 16:29:20.077740] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:53.407 [2024-11-04 16:29:20.077786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.407 [2024-11-04 16:29:20.143701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.407 [2024-11-04 16:29:20.182772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.407 [2024-11-04 16:29:20.182803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.407 [2024-11-04 16:29:20.182811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.407 [2024-11-04 16:29:20.182819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.407 [2024-11-04 16:29:20.182824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:53.407 [2024-11-04 16:29:20.183371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.666 [2024-11-04 16:29:20.322667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.666 malloc0 00:17:53.666 [2024-11-04 16:29:20.350821] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.666 [2024-11-04 16:29:20.351035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2836073 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2836073 /var/tmp/bdevperf.sock 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2836073 ']' 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.666 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.666 [2024-11-04 16:29:20.423781] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:53.666 [2024-11-04 16:29:20.423822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836073 ] 00:17:53.666 [2024-11-04 16:29:20.485885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.925 [2024-11-04 16:29:20.526356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.925 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.925 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.925 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0e6TiEhu3u 00:17:54.183 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.183 [2024-11-04 16:29:20.993100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.440 nvme0n1 00:17:54.440 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.440 Running I/O for 1 seconds... 
00:17:55.375 5443.00 IOPS, 21.26 MiB/s 00:17:55.375 Latency(us) 00:17:55.375 [2024-11-04T15:29:22.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.375 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:55.375 Verification LBA range: start 0x0 length 0x2000 00:17:55.375 nvme0n1 : 1.01 5493.99 21.46 0.00 0.00 23130.38 4868.39 23468.13 00:17:55.375 [2024-11-04T15:29:22.199Z] =================================================================================================================== 00:17:55.375 [2024-11-04T15:29:22.199Z] Total : 5493.99 21.46 0.00 0.00 23130.38 4868.39 23468.13 00:17:55.375 { 00:17:55.375 "results": [ 00:17:55.375 { 00:17:55.375 "job": "nvme0n1", 00:17:55.375 "core_mask": "0x2", 00:17:55.375 "workload": "verify", 00:17:55.375 "status": "finished", 00:17:55.375 "verify_range": { 00:17:55.375 "start": 0, 00:17:55.375 "length": 8192 00:17:55.375 }, 00:17:55.375 "queue_depth": 128, 00:17:55.375 "io_size": 4096, 00:17:55.375 "runtime": 1.014017, 00:17:55.375 "iops": 5493.990731910806, 00:17:55.375 "mibps": 21.460901296526586, 00:17:55.375 "io_failed": 0, 00:17:55.375 "io_timeout": 0, 00:17:55.375 "avg_latency_us": 23130.380291133504, 00:17:55.375 "min_latency_us": 4868.388571428572, 00:17:55.375 "max_latency_us": 23468.129523809523 00:17:55.375 } 00:17:55.375 ], 00:17:55.375 "core_count": 1 00:17:55.375 } 00:17:55.633 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:55.633 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.633 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.633 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.633 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:55.633 "subsystems": [ 00:17:55.633 { 00:17:55.633 "subsystem": 
"keyring", 00:17:55.633 "config": [ 00:17:55.633 { 00:17:55.633 "method": "keyring_file_add_key", 00:17:55.633 "params": { 00:17:55.633 "name": "key0", 00:17:55.633 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:55.633 } 00:17:55.633 } 00:17:55.633 ] 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "subsystem": "iobuf", 00:17:55.633 "config": [ 00:17:55.633 { 00:17:55.633 "method": "iobuf_set_options", 00:17:55.633 "params": { 00:17:55.633 "small_pool_count": 8192, 00:17:55.633 "large_pool_count": 1024, 00:17:55.633 "small_bufsize": 8192, 00:17:55.633 "large_bufsize": 135168, 00:17:55.633 "enable_numa": false 00:17:55.633 } 00:17:55.633 } 00:17:55.633 ] 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "subsystem": "sock", 00:17:55.633 "config": [ 00:17:55.633 { 00:17:55.633 "method": "sock_set_default_impl", 00:17:55.633 "params": { 00:17:55.633 "impl_name": "posix" 00:17:55.633 } 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "method": "sock_impl_set_options", 00:17:55.633 "params": { 00:17:55.633 "impl_name": "ssl", 00:17:55.633 "recv_buf_size": 4096, 00:17:55.633 "send_buf_size": 4096, 00:17:55.633 "enable_recv_pipe": true, 00:17:55.633 "enable_quickack": false, 00:17:55.633 "enable_placement_id": 0, 00:17:55.633 "enable_zerocopy_send_server": true, 00:17:55.633 "enable_zerocopy_send_client": false, 00:17:55.633 "zerocopy_threshold": 0, 00:17:55.633 "tls_version": 0, 00:17:55.633 "enable_ktls": false 00:17:55.633 } 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "method": "sock_impl_set_options", 00:17:55.633 "params": { 00:17:55.633 "impl_name": "posix", 00:17:55.633 "recv_buf_size": 2097152, 00:17:55.633 "send_buf_size": 2097152, 00:17:55.633 "enable_recv_pipe": true, 00:17:55.633 "enable_quickack": false, 00:17:55.633 "enable_placement_id": 0, 00:17:55.633 "enable_zerocopy_send_server": true, 00:17:55.633 "enable_zerocopy_send_client": false, 00:17:55.633 "zerocopy_threshold": 0, 00:17:55.633 "tls_version": 0, 00:17:55.633 "enable_ktls": false 00:17:55.633 } 00:17:55.633 } 00:17:55.633 
] 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "subsystem": "vmd", 00:17:55.633 "config": [] 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "subsystem": "accel", 00:17:55.633 "config": [ 00:17:55.633 { 00:17:55.633 "method": "accel_set_options", 00:17:55.633 "params": { 00:17:55.633 "small_cache_size": 128, 00:17:55.633 "large_cache_size": 16, 00:17:55.633 "task_count": 2048, 00:17:55.633 "sequence_count": 2048, 00:17:55.633 "buf_count": 2048 00:17:55.633 } 00:17:55.633 } 00:17:55.633 ] 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "subsystem": "bdev", 00:17:55.633 "config": [ 00:17:55.633 { 00:17:55.633 "method": "bdev_set_options", 00:17:55.633 "params": { 00:17:55.633 "bdev_io_pool_size": 65535, 00:17:55.633 "bdev_io_cache_size": 256, 00:17:55.633 "bdev_auto_examine": true, 00:17:55.633 "iobuf_small_cache_size": 128, 00:17:55.633 "iobuf_large_cache_size": 16 00:17:55.633 } 00:17:55.633 }, 00:17:55.633 { 00:17:55.633 "method": "bdev_raid_set_options", 00:17:55.633 "params": { 00:17:55.633 "process_window_size_kb": 1024, 00:17:55.633 "process_max_bandwidth_mb_sec": 0 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "bdev_iscsi_set_options", 00:17:55.634 "params": { 00:17:55.634 "timeout_sec": 30 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "bdev_nvme_set_options", 00:17:55.634 "params": { 00:17:55.634 "action_on_timeout": "none", 00:17:55.634 "timeout_us": 0, 00:17:55.634 "timeout_admin_us": 0, 00:17:55.634 "keep_alive_timeout_ms": 10000, 00:17:55.634 "arbitration_burst": 0, 00:17:55.634 "low_priority_weight": 0, 00:17:55.634 "medium_priority_weight": 0, 00:17:55.634 "high_priority_weight": 0, 00:17:55.634 "nvme_adminq_poll_period_us": 10000, 00:17:55.634 "nvme_ioq_poll_period_us": 0, 00:17:55.634 "io_queue_requests": 0, 00:17:55.634 "delay_cmd_submit": true, 00:17:55.634 "transport_retry_count": 4, 00:17:55.634 "bdev_retry_count": 3, 00:17:55.634 "transport_ack_timeout": 0, 00:17:55.634 "ctrlr_loss_timeout_sec": 0, 
00:17:55.634 "reconnect_delay_sec": 0, 00:17:55.634 "fast_io_fail_timeout_sec": 0, 00:17:55.634 "disable_auto_failback": false, 00:17:55.634 "generate_uuids": false, 00:17:55.634 "transport_tos": 0, 00:17:55.634 "nvme_error_stat": false, 00:17:55.634 "rdma_srq_size": 0, 00:17:55.634 "io_path_stat": false, 00:17:55.634 "allow_accel_sequence": false, 00:17:55.634 "rdma_max_cq_size": 0, 00:17:55.634 "rdma_cm_event_timeout_ms": 0, 00:17:55.634 "dhchap_digests": [ 00:17:55.634 "sha256", 00:17:55.634 "sha384", 00:17:55.634 "sha512" 00:17:55.634 ], 00:17:55.634 "dhchap_dhgroups": [ 00:17:55.634 "null", 00:17:55.634 "ffdhe2048", 00:17:55.634 "ffdhe3072", 00:17:55.634 "ffdhe4096", 00:17:55.634 "ffdhe6144", 00:17:55.634 "ffdhe8192" 00:17:55.634 ] 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "bdev_nvme_set_hotplug", 00:17:55.634 "params": { 00:17:55.634 "period_us": 100000, 00:17:55.634 "enable": false 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "bdev_malloc_create", 00:17:55.634 "params": { 00:17:55.634 "name": "malloc0", 00:17:55.634 "num_blocks": 8192, 00:17:55.634 "block_size": 4096, 00:17:55.634 "physical_block_size": 4096, 00:17:55.634 "uuid": "ff45ef5c-2572-4b6e-bc16-caccc45a2746", 00:17:55.634 "optimal_io_boundary": 0, 00:17:55.634 "md_size": 0, 00:17:55.634 "dif_type": 0, 00:17:55.634 "dif_is_head_of_md": false, 00:17:55.634 "dif_pi_format": 0 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "bdev_wait_for_examine" 00:17:55.634 } 00:17:55.634 ] 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "subsystem": "nbd", 00:17:55.634 "config": [] 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "subsystem": "scheduler", 00:17:55.634 "config": [ 00:17:55.634 { 00:17:55.634 "method": "framework_set_scheduler", 00:17:55.634 "params": { 00:17:55.634 "name": "static" 00:17:55.634 } 00:17:55.634 } 00:17:55.634 ] 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "subsystem": "nvmf", 00:17:55.634 "config": [ 00:17:55.634 { 
00:17:55.634 "method": "nvmf_set_config", 00:17:55.634 "params": { 00:17:55.634 "discovery_filter": "match_any", 00:17:55.634 "admin_cmd_passthru": { 00:17:55.634 "identify_ctrlr": false 00:17:55.634 }, 00:17:55.634 "dhchap_digests": [ 00:17:55.634 "sha256", 00:17:55.634 "sha384", 00:17:55.634 "sha512" 00:17:55.634 ], 00:17:55.634 "dhchap_dhgroups": [ 00:17:55.634 "null", 00:17:55.634 "ffdhe2048", 00:17:55.634 "ffdhe3072", 00:17:55.634 "ffdhe4096", 00:17:55.634 "ffdhe6144", 00:17:55.634 "ffdhe8192" 00:17:55.634 ] 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_set_max_subsystems", 00:17:55.634 "params": { 00:17:55.634 "max_subsystems": 1024 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_set_crdt", 00:17:55.634 "params": { 00:17:55.634 "crdt1": 0, 00:17:55.634 "crdt2": 0, 00:17:55.634 "crdt3": 0 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_create_transport", 00:17:55.634 "params": { 00:17:55.634 "trtype": "TCP", 00:17:55.634 "max_queue_depth": 128, 00:17:55.634 "max_io_qpairs_per_ctrlr": 127, 00:17:55.634 "in_capsule_data_size": 4096, 00:17:55.634 "max_io_size": 131072, 00:17:55.634 "io_unit_size": 131072, 00:17:55.634 "max_aq_depth": 128, 00:17:55.634 "num_shared_buffers": 511, 00:17:55.634 "buf_cache_size": 4294967295, 00:17:55.634 "dif_insert_or_strip": false, 00:17:55.634 "zcopy": false, 00:17:55.634 "c2h_success": false, 00:17:55.634 "sock_priority": 0, 00:17:55.634 "abort_timeout_sec": 1, 00:17:55.634 "ack_timeout": 0, 00:17:55.634 "data_wr_pool_size": 0 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_create_subsystem", 00:17:55.634 "params": { 00:17:55.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.634 "allow_any_host": false, 00:17:55.634 "serial_number": "00000000000000000000", 00:17:55.634 "model_number": "SPDK bdev Controller", 00:17:55.634 "max_namespaces": 32, 00:17:55.634 "min_cntlid": 1, 00:17:55.634 "max_cntlid": 65519, 00:17:55.634 
"ana_reporting": false 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_subsystem_add_host", 00:17:55.634 "params": { 00:17:55.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.634 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.634 "psk": "key0" 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_subsystem_add_ns", 00:17:55.634 "params": { 00:17:55.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.634 "namespace": { 00:17:55.634 "nsid": 1, 00:17:55.634 "bdev_name": "malloc0", 00:17:55.634 "nguid": "FF45EF5C25724B6EBC16CACCC45A2746", 00:17:55.634 "uuid": "ff45ef5c-2572-4b6e-bc16-caccc45a2746", 00:17:55.634 "no_auto_visible": false 00:17:55.634 } 00:17:55.634 } 00:17:55.634 }, 00:17:55.634 { 00:17:55.634 "method": "nvmf_subsystem_add_listener", 00:17:55.634 "params": { 00:17:55.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.634 "listen_address": { 00:17:55.634 "trtype": "TCP", 00:17:55.634 "adrfam": "IPv4", 00:17:55.634 "traddr": "10.0.0.2", 00:17:55.634 "trsvcid": "4420" 00:17:55.634 }, 00:17:55.634 "secure_channel": false, 00:17:55.634 "sock_impl": "ssl" 00:17:55.634 } 00:17:55.634 } 00:17:55.634 ] 00:17:55.634 } 00:17:55.634 ] 00:17:55.634 }' 00:17:55.634 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:55.894 "subsystems": [ 00:17:55.894 { 00:17:55.894 "subsystem": "keyring", 00:17:55.894 "config": [ 00:17:55.894 { 00:17:55.894 "method": "keyring_file_add_key", 00:17:55.894 "params": { 00:17:55.894 "name": "key0", 00:17:55.894 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:55.894 } 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "iobuf", 00:17:55.894 "config": [ 00:17:55.894 { 00:17:55.894 "method": "iobuf_set_options", 00:17:55.894 "params": { 00:17:55.894 
"small_pool_count": 8192, 00:17:55.894 "large_pool_count": 1024, 00:17:55.894 "small_bufsize": 8192, 00:17:55.894 "large_bufsize": 135168, 00:17:55.894 "enable_numa": false 00:17:55.894 } 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "sock", 00:17:55.894 "config": [ 00:17:55.894 { 00:17:55.894 "method": "sock_set_default_impl", 00:17:55.894 "params": { 00:17:55.894 "impl_name": "posix" 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "sock_impl_set_options", 00:17:55.894 "params": { 00:17:55.894 "impl_name": "ssl", 00:17:55.894 "recv_buf_size": 4096, 00:17:55.894 "send_buf_size": 4096, 00:17:55.894 "enable_recv_pipe": true, 00:17:55.894 "enable_quickack": false, 00:17:55.894 "enable_placement_id": 0, 00:17:55.894 "enable_zerocopy_send_server": true, 00:17:55.894 "enable_zerocopy_send_client": false, 00:17:55.894 "zerocopy_threshold": 0, 00:17:55.894 "tls_version": 0, 00:17:55.894 "enable_ktls": false 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "sock_impl_set_options", 00:17:55.894 "params": { 00:17:55.894 "impl_name": "posix", 00:17:55.894 "recv_buf_size": 2097152, 00:17:55.894 "send_buf_size": 2097152, 00:17:55.894 "enable_recv_pipe": true, 00:17:55.894 "enable_quickack": false, 00:17:55.894 "enable_placement_id": 0, 00:17:55.894 "enable_zerocopy_send_server": true, 00:17:55.894 "enable_zerocopy_send_client": false, 00:17:55.894 "zerocopy_threshold": 0, 00:17:55.894 "tls_version": 0, 00:17:55.894 "enable_ktls": false 00:17:55.894 } 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "vmd", 00:17:55.894 "config": [] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "accel", 00:17:55.894 "config": [ 00:17:55.894 { 00:17:55.894 "method": "accel_set_options", 00:17:55.894 "params": { 00:17:55.894 "small_cache_size": 128, 00:17:55.894 "large_cache_size": 16, 00:17:55.894 "task_count": 2048, 00:17:55.894 "sequence_count": 2048, 00:17:55.894 
"buf_count": 2048 00:17:55.894 } 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "bdev", 00:17:55.894 "config": [ 00:17:55.894 { 00:17:55.894 "method": "bdev_set_options", 00:17:55.894 "params": { 00:17:55.894 "bdev_io_pool_size": 65535, 00:17:55.894 "bdev_io_cache_size": 256, 00:17:55.894 "bdev_auto_examine": true, 00:17:55.894 "iobuf_small_cache_size": 128, 00:17:55.894 "iobuf_large_cache_size": 16 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_raid_set_options", 00:17:55.894 "params": { 00:17:55.894 "process_window_size_kb": 1024, 00:17:55.894 "process_max_bandwidth_mb_sec": 0 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_iscsi_set_options", 00:17:55.894 "params": { 00:17:55.894 "timeout_sec": 30 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_nvme_set_options", 00:17:55.894 "params": { 00:17:55.894 "action_on_timeout": "none", 00:17:55.894 "timeout_us": 0, 00:17:55.894 "timeout_admin_us": 0, 00:17:55.894 "keep_alive_timeout_ms": 10000, 00:17:55.894 "arbitration_burst": 0, 00:17:55.894 "low_priority_weight": 0, 00:17:55.894 "medium_priority_weight": 0, 00:17:55.894 "high_priority_weight": 0, 00:17:55.894 "nvme_adminq_poll_period_us": 10000, 00:17:55.894 "nvme_ioq_poll_period_us": 0, 00:17:55.894 "io_queue_requests": 512, 00:17:55.894 "delay_cmd_submit": true, 00:17:55.894 "transport_retry_count": 4, 00:17:55.894 "bdev_retry_count": 3, 00:17:55.894 "transport_ack_timeout": 0, 00:17:55.894 "ctrlr_loss_timeout_sec": 0, 00:17:55.894 "reconnect_delay_sec": 0, 00:17:55.894 "fast_io_fail_timeout_sec": 0, 00:17:55.894 "disable_auto_failback": false, 00:17:55.894 "generate_uuids": false, 00:17:55.894 "transport_tos": 0, 00:17:55.894 "nvme_error_stat": false, 00:17:55.894 "rdma_srq_size": 0, 00:17:55.894 "io_path_stat": false, 00:17:55.894 "allow_accel_sequence": false, 00:17:55.894 "rdma_max_cq_size": 0, 00:17:55.894 "rdma_cm_event_timeout_ms": 0, 
00:17:55.894 "dhchap_digests": [ 00:17:55.894 "sha256", 00:17:55.894 "sha384", 00:17:55.894 "sha512" 00:17:55.894 ], 00:17:55.894 "dhchap_dhgroups": [ 00:17:55.894 "null", 00:17:55.894 "ffdhe2048", 00:17:55.894 "ffdhe3072", 00:17:55.894 "ffdhe4096", 00:17:55.894 "ffdhe6144", 00:17:55.894 "ffdhe8192" 00:17:55.894 ] 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_nvme_attach_controller", 00:17:55.894 "params": { 00:17:55.894 "name": "nvme0", 00:17:55.894 "trtype": "TCP", 00:17:55.894 "adrfam": "IPv4", 00:17:55.894 "traddr": "10.0.0.2", 00:17:55.894 "trsvcid": "4420", 00:17:55.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.894 "prchk_reftag": false, 00:17:55.894 "prchk_guard": false, 00:17:55.894 "ctrlr_loss_timeout_sec": 0, 00:17:55.894 "reconnect_delay_sec": 0, 00:17:55.894 "fast_io_fail_timeout_sec": 0, 00:17:55.894 "psk": "key0", 00:17:55.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.894 "hdgst": false, 00:17:55.894 "ddgst": false, 00:17:55.894 "multipath": "multipath" 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_nvme_set_hotplug", 00:17:55.894 "params": { 00:17:55.894 "period_us": 100000, 00:17:55.894 "enable": false 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_enable_histogram", 00:17:55.894 "params": { 00:17:55.894 "name": "nvme0n1", 00:17:55.894 "enable": true 00:17:55.894 } 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "method": "bdev_wait_for_examine" 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }, 00:17:55.894 { 00:17:55.894 "subsystem": "nbd", 00:17:55.894 "config": [] 00:17:55.894 } 00:17:55.894 ] 00:17:55.894 }' 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2836073 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2836073 ']' 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2836073 00:17:55.894 16:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836073 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836073' 00:17:55.894 killing process with pid 2836073 00:17:55.894 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2836073 00:17:55.895 Received shutdown signal, test time was about 1.000000 seconds 00:17:55.895 00:17:55.895 Latency(us) 00:17:55.895 [2024-11-04T15:29:22.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.895 [2024-11-04T15:29:22.719Z] =================================================================================================================== 00:17:55.895 [2024-11-04T15:29:22.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.895 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2836073 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2836054 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2836054 ']' 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2836054 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.153 
16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836054 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836054' 00:17:56.153 killing process with pid 2836054 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2836054 00:17:56.153 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2836054 00:17:56.412 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:56.412 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.412 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.412 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:56.412 "subsystems": [ 00:17:56.412 { 00:17:56.412 "subsystem": "keyring", 00:17:56.412 "config": [ 00:17:56.412 { 00:17:56.412 "method": "keyring_file_add_key", 00:17:56.412 "params": { 00:17:56.412 "name": "key0", 00:17:56.412 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:56.412 } 00:17:56.412 } 00:17:56.412 ] 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "subsystem": "iobuf", 00:17:56.412 "config": [ 00:17:56.412 { 00:17:56.412 "method": "iobuf_set_options", 00:17:56.412 "params": { 00:17:56.412 "small_pool_count": 8192, 00:17:56.412 "large_pool_count": 1024, 00:17:56.412 "small_bufsize": 8192, 00:17:56.412 "large_bufsize": 135168, 00:17:56.412 "enable_numa": false 00:17:56.412 } 00:17:56.412 } 00:17:56.412 ] 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "subsystem": "sock", 00:17:56.412 "config": [ 
00:17:56.412 { 00:17:56.412 "method": "sock_set_default_impl", 00:17:56.412 "params": { 00:17:56.412 "impl_name": "posix" 00:17:56.412 } 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "method": "sock_impl_set_options", 00:17:56.412 "params": { 00:17:56.412 "impl_name": "ssl", 00:17:56.412 "recv_buf_size": 4096, 00:17:56.412 "send_buf_size": 4096, 00:17:56.412 "enable_recv_pipe": true, 00:17:56.412 "enable_quickack": false, 00:17:56.412 "enable_placement_id": 0, 00:17:56.412 "enable_zerocopy_send_server": true, 00:17:56.412 "enable_zerocopy_send_client": false, 00:17:56.412 "zerocopy_threshold": 0, 00:17:56.412 "tls_version": 0, 00:17:56.412 "enable_ktls": false 00:17:56.412 } 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "method": "sock_impl_set_options", 00:17:56.412 "params": { 00:17:56.412 "impl_name": "posix", 00:17:56.412 "recv_buf_size": 2097152, 00:17:56.412 "send_buf_size": 2097152, 00:17:56.412 "enable_recv_pipe": true, 00:17:56.412 "enable_quickack": false, 00:17:56.412 "enable_placement_id": 0, 00:17:56.412 "enable_zerocopy_send_server": true, 00:17:56.412 "enable_zerocopy_send_client": false, 00:17:56.412 "zerocopy_threshold": 0, 00:17:56.412 "tls_version": 0, 00:17:56.412 "enable_ktls": false 00:17:56.412 } 00:17:56.412 } 00:17:56.412 ] 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "subsystem": "vmd", 00:17:56.412 "config": [] 00:17:56.412 }, 00:17:56.412 { 00:17:56.412 "subsystem": "accel", 00:17:56.412 "config": [ 00:17:56.412 { 00:17:56.413 "method": "accel_set_options", 00:17:56.413 "params": { 00:17:56.413 "small_cache_size": 128, 00:17:56.413 "large_cache_size": 16, 00:17:56.413 "task_count": 2048, 00:17:56.413 "sequence_count": 2048, 00:17:56.413 "buf_count": 2048 00:17:56.413 } 00:17:56.413 } 00:17:56.413 ] 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "subsystem": "bdev", 00:17:56.413 "config": [ 00:17:56.413 { 00:17:56.413 "method": "bdev_set_options", 00:17:56.413 "params": { 00:17:56.413 "bdev_io_pool_size": 65535, 00:17:56.413 "bdev_io_cache_size": 
256, 00:17:56.413 "bdev_auto_examine": true, 00:17:56.413 "iobuf_small_cache_size": 128, 00:17:56.413 "iobuf_large_cache_size": 16 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_raid_set_options", 00:17:56.413 "params": { 00:17:56.413 "process_window_size_kb": 1024, 00:17:56.413 "process_max_bandwidth_mb_sec": 0 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_iscsi_set_options", 00:17:56.413 "params": { 00:17:56.413 "timeout_sec": 30 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_nvme_set_options", 00:17:56.413 "params": { 00:17:56.413 "action_on_timeout": "none", 00:17:56.413 "timeout_us": 0, 00:17:56.413 "timeout_admin_us": 0, 00:17:56.413 "keep_alive_timeout_ms": 10000, 00:17:56.413 "arbitration_burst": 0, 00:17:56.413 "low_priority_weight": 0, 00:17:56.413 "medium_priority_weight": 0, 00:17:56.413 "high_priority_weight": 0, 00:17:56.413 "nvme_adminq_poll_period_us": 10000, 00:17:56.413 "nvme_ioq_poll_period_us": 0, 00:17:56.413 "io_queue_requests": 0, 00:17:56.413 "delay_cmd_submit": true, 00:17:56.413 "transport_retry_count": 4, 00:17:56.413 "bdev_retry_count": 3, 00:17:56.413 "transport_ack_timeout": 0, 00:17:56.413 "ctrlr_loss_timeout_sec": 0, 00:17:56.413 "reconnect_delay_sec": 0, 00:17:56.413 "fast_io_fail_timeout_sec": 0, 00:17:56.413 "disable_auto_failback": false, 00:17:56.413 "generate_uuids": false, 00:17:56.413 "transport_tos": 0, 00:17:56.413 "nvme_error_stat": false, 00:17:56.413 "rdma_srq_size": 0, 00:17:56.413 "io_path_stat": false, 00:17:56.413 "allow_accel_sequence": false, 00:17:56.413 "rdma_max_cq_size": 0, 00:17:56.413 "rdma_cm_event_timeout_ms": 0, 00:17:56.413 "dhchap_digests": [ 00:17:56.413 "sha256", 00:17:56.413 "sha384", 00:17:56.413 "sha512" 00:17:56.413 ], 00:17:56.413 "dhchap_dhgroups": [ 00:17:56.413 "null", 00:17:56.413 "ffdhe2048", 00:17:56.413 "ffdhe3072", 00:17:56.413 "ffdhe4096", 00:17:56.413 "ffdhe6144", 00:17:56.413 "ffdhe8192" 00:17:56.413 ] 
00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_nvme_set_hotplug", 00:17:56.413 "params": { 00:17:56.413 "period_us": 100000, 00:17:56.413 "enable": false 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_malloc_create", 00:17:56.413 "params": { 00:17:56.413 "name": "malloc0", 00:17:56.413 "num_blocks": 8192, 00:17:56.413 "block_size": 4096, 00:17:56.413 "physical_block_size": 4096, 00:17:56.413 "uuid": "ff45ef5c-2572-4b6e-bc16-caccc45a2746", 00:17:56.413 "optimal_io_boundary": 0, 00:17:56.413 "md_size": 0, 00:17:56.413 "dif_type": 0, 00:17:56.413 "dif_is_head_of_md": false, 00:17:56.413 "dif_pi_format": 0 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "bdev_wait_for_examine" 00:17:56.413 } 00:17:56.413 ] 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "subsystem": "nbd", 00:17:56.413 "config": [] 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "subsystem": "scheduler", 00:17:56.413 "config": [ 00:17:56.413 { 00:17:56.413 "method": "framework_set_scheduler", 00:17:56.413 "params": { 00:17:56.413 "name": "static" 00:17:56.413 } 00:17:56.413 } 00:17:56.413 ] 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "subsystem": "nvmf", 00:17:56.413 "config": [ 00:17:56.413 { 00:17:56.413 "method": "nvmf_set_config", 00:17:56.413 "params": { 00:17:56.413 "discovery_filter": "match_any", 00:17:56.413 "admin_cmd_passthru": { 00:17:56.413 "identify_ctrlr": false 00:17:56.413 }, 00:17:56.413 "dhchap_digests": [ 00:17:56.413 "sha256", 00:17:56.413 "sha384", 00:17:56.413 "sha512" 00:17:56.413 ], 00:17:56.413 "dhchap_dhgroups": [ 00:17:56.413 "null", 00:17:56.413 "ffdhe2048", 00:17:56.413 "ffdhe3072", 00:17:56.413 "ffdhe4096", 00:17:56.413 "ffdhe6144", 00:17:56.413 "ffdhe8192" 00:17:56.413 ] 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "nvmf_set_max_subsystems", 00:17:56.413 "params": { 00:17:56.413 "max_subsystems": 1024 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": 
"nvmf_set_crdt", 00:17:56.413 "params": { 00:17:56.413 "crdt1": 0, 00:17:56.413 "crdt2": 0, 00:17:56.413 "crdt3": 0 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "nvmf_create_transport", 00:17:56.413 "params": { 00:17:56.413 "trtype": "TCP", 00:17:56.413 "max_queue_depth": 128, 00:17:56.413 "max_io_qpairs_per_ctrlr": 127, 00:17:56.413 "in_capsule_data_size": 4096, 00:17:56.413 "max_io_size": 131072, 00:17:56.413 "io_unit_size": 131072, 00:17:56.413 "max_aq_depth": 128, 00:17:56.413 "num_shared_buffers": 511, 00:17:56.413 "buf_cache_size": 4294967295, 00:17:56.413 "dif_insert_or_strip": false, 00:17:56.413 "zcopy": false, 00:17:56.413 "c2h_success": false, 00:17:56.413 "sock_priority": 0, 00:17:56.413 "abort_timeout_sec": 1, 00:17:56.413 "ack_timeout": 0, 00:17:56.413 "data_wr_pool_size": 0 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "nvmf_create_subsystem", 00:17:56.413 "params": { 00:17:56.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.413 "allow_any_host": false, 00:17:56.413 "serial_number": "00000000000000000000", 00:17:56.413 "model_number": "SPDK bdev Controller", 00:17:56.413 "max_namespaces": 32, 00:17:56.413 "min_cntlid": 1, 00:17:56.413 "max_cntlid": 65519, 00:17:56.413 "ana_reporting": false 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "nvmf_subsystem_add_host", 00:17:56.413 "params": { 00:17:56.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.413 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.413 "psk": "key0" 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 00:17:56.413 "method": "nvmf_subsystem_add_ns", 00:17:56.413 "params": { 00:17:56.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.413 "namespace": { 00:17:56.413 "nsid": 1, 00:17:56.413 "bdev_name": "malloc0", 00:17:56.413 "nguid": "FF45EF5C25724B6EBC16CACCC45A2746", 00:17:56.413 "uuid": "ff45ef5c-2572-4b6e-bc16-caccc45a2746", 00:17:56.413 "no_auto_visible": false 00:17:56.413 } 00:17:56.413 } 00:17:56.413 }, 00:17:56.413 { 
00:17:56.413 "method": "nvmf_subsystem_add_listener", 00:17:56.413 "params": { 00:17:56.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.413 "listen_address": { 00:17:56.413 "trtype": "TCP", 00:17:56.413 "adrfam": "IPv4", 00:17:56.413 "traddr": "10.0.0.2", 00:17:56.413 "trsvcid": "4420" 00:17:56.413 }, 00:17:56.413 "secure_channel": false, 00:17:56.413 "sock_impl": "ssl" 00:17:56.413 } 00:17:56.413 } 00:17:56.413 ] 00:17:56.413 } 00:17:56.413 ] 00:17:56.413 }' 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2836550 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2836550 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2836550 ']' 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.413 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.413 [2024-11-04 16:29:23.063009] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:17:56.413 [2024-11-04 16:29:23.063053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.413 [2024-11-04 16:29:23.127849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.413 [2024-11-04 16:29:23.168273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.413 [2024-11-04 16:29:23.168311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.413 [2024-11-04 16:29:23.168317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.413 [2024-11-04 16:29:23.168323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.413 [2024-11-04 16:29:23.168328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.413 [2024-11-04 16:29:23.168937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.672 [2024-11-04 16:29:23.380879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.672 [2024-11-04 16:29:23.412913] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.672 [2024-11-04 16:29:23.413104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2836671 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2836671 /var/tmp/bdevperf.sock 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2836671 ']' 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.239 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:57.239 "subsystems": [ 00:17:57.239 { 00:17:57.239 "subsystem": "keyring", 00:17:57.239 "config": [ 00:17:57.239 { 00:17:57.239 "method": "keyring_file_add_key", 00:17:57.239 "params": { 00:17:57.239 "name": "key0", 00:17:57.239 "path": "/tmp/tmp.0e6TiEhu3u" 00:17:57.239 } 00:17:57.239 } 00:17:57.239 ] 00:17:57.239 }, 00:17:57.239 { 00:17:57.239 "subsystem": "iobuf", 00:17:57.239 "config": [ 00:17:57.239 { 00:17:57.239 "method": "iobuf_set_options", 00:17:57.239 "params": { 00:17:57.239 "small_pool_count": 8192, 00:17:57.239 "large_pool_count": 1024, 00:17:57.239 "small_bufsize": 8192, 00:17:57.239 "large_bufsize": 135168, 00:17:57.239 "enable_numa": false 00:17:57.239 } 00:17:57.239 } 00:17:57.239 ] 00:17:57.239 }, 00:17:57.239 { 00:17:57.239 "subsystem": "sock", 00:17:57.239 "config": [ 00:17:57.239 { 00:17:57.239 "method": "sock_set_default_impl", 00:17:57.239 "params": { 00:17:57.239 "impl_name": "posix" 00:17:57.239 } 00:17:57.239 }, 00:17:57.239 { 00:17:57.239 "method": "sock_impl_set_options", 00:17:57.239 "params": { 00:17:57.239 "impl_name": "ssl", 00:17:57.239 "recv_buf_size": 4096, 00:17:57.239 "send_buf_size": 4096, 00:17:57.239 "enable_recv_pipe": true, 00:17:57.239 "enable_quickack": false, 00:17:57.239 "enable_placement_id": 0, 00:17:57.239 "enable_zerocopy_send_server": true, 00:17:57.240 "enable_zerocopy_send_client": false, 00:17:57.240 "zerocopy_threshold": 0, 00:17:57.240 "tls_version": 0, 00:17:57.240 "enable_ktls": false 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "sock_impl_set_options", 00:17:57.240 "params": { 
00:17:57.240 "impl_name": "posix", 00:17:57.240 "recv_buf_size": 2097152, 00:17:57.240 "send_buf_size": 2097152, 00:17:57.240 "enable_recv_pipe": true, 00:17:57.240 "enable_quickack": false, 00:17:57.240 "enable_placement_id": 0, 00:17:57.240 "enable_zerocopy_send_server": true, 00:17:57.240 "enable_zerocopy_send_client": false, 00:17:57.240 "zerocopy_threshold": 0, 00:17:57.240 "tls_version": 0, 00:17:57.240 "enable_ktls": false 00:17:57.240 } 00:17:57.240 } 00:17:57.240 ] 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "subsystem": "vmd", 00:17:57.240 "config": [] 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "subsystem": "accel", 00:17:57.240 "config": [ 00:17:57.240 { 00:17:57.240 "method": "accel_set_options", 00:17:57.240 "params": { 00:17:57.240 "small_cache_size": 128, 00:17:57.240 "large_cache_size": 16, 00:17:57.240 "task_count": 2048, 00:17:57.240 "sequence_count": 2048, 00:17:57.240 "buf_count": 2048 00:17:57.240 } 00:17:57.240 } 00:17:57.240 ] 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "subsystem": "bdev", 00:17:57.240 "config": [ 00:17:57.240 { 00:17:57.240 "method": "bdev_set_options", 00:17:57.240 "params": { 00:17:57.240 "bdev_io_pool_size": 65535, 00:17:57.240 "bdev_io_cache_size": 256, 00:17:57.240 "bdev_auto_examine": true, 00:17:57.240 "iobuf_small_cache_size": 128, 00:17:57.240 "iobuf_large_cache_size": 16 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "bdev_raid_set_options", 00:17:57.240 "params": { 00:17:57.240 "process_window_size_kb": 1024, 00:17:57.240 "process_max_bandwidth_mb_sec": 0 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "bdev_iscsi_set_options", 00:17:57.240 "params": { 00:17:57.240 "timeout_sec": 30 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "bdev_nvme_set_options", 00:17:57.240 "params": { 00:17:57.240 "action_on_timeout": "none", 00:17:57.240 "timeout_us": 0, 00:17:57.240 "timeout_admin_us": 0, 00:17:57.240 "keep_alive_timeout_ms": 10000, 00:17:57.240 
"arbitration_burst": 0, 00:17:57.240 "low_priority_weight": 0, 00:17:57.240 "medium_priority_weight": 0, 00:17:57.240 "high_priority_weight": 0, 00:17:57.240 "nvme_adminq_poll_period_us": 10000, 00:17:57.240 "nvme_ioq_poll_period_us": 0, 00:17:57.240 "io_queue_requests": 512, 00:17:57.240 "delay_cmd_submit": true, 00:17:57.240 "transport_retry_count": 4, 00:17:57.240 "bdev_retry_count": 3, 00:17:57.240 "transport_ack_timeout": 0, 00:17:57.240 "ctrlr_loss_timeout_sec": 0, 00:17:57.240 "reconnect_delay_sec": 0, 00:17:57.240 "fast_io_fail_timeout_sec": 0, 00:17:57.240 "disable_auto_failback": false, 00:17:57.240 "generate_uuids": false, 00:17:57.240 "transport_tos": 0, 00:17:57.240 "nvme_error_stat": false, 00:17:57.240 "rdma_srq_size": 0, 00:17:57.240 "io_path_stat": false, 00:17:57.240 "allow_accel_sequence": false, 00:17:57.240 "rdma_max_cq_size": 0, 00:17:57.240 "rdma_cm_event_timeout_ms": 0, 00:17:57.240 "dhchap_digests": [ 00:17:57.240 "sha256", 00:17:57.240 "sha384", 00:17:57.240 "sha512" 00:17:57.240 ], 00:17:57.240 "dhchap_dhgroups": [ 00:17:57.240 "null", 00:17:57.240 "ffdhe2048", 00:17:57.240 "ffdhe3072", 00:17:57.240 "ffdhe4096", 00:17:57.240 "ffdhe6144", 00:17:57.240 "ffdhe8192" 00:17:57.240 ] 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "bdev_nvme_attach_controller", 00:17:57.240 "params": { 00:17:57.240 "name": "nvme0", 00:17:57.240 "trtype": "TCP", 00:17:57.240 "adrfam": "IPv4", 00:17:57.240 "traddr": "10.0.0.2", 00:17:57.240 "trsvcid": "4420", 00:17:57.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.240 "prchk_reftag": false, 00:17:57.240 "prchk_guard": false, 00:17:57.240 "ctrlr_loss_timeout_sec": 0, 00:17:57.240 "reconnect_delay_sec": 0, 00:17:57.240 "fast_io_fail_timeout_sec": 0, 00:17:57.240 "psk": "key0", 00:17:57.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.240 "hdgst": false, 00:17:57.240 "ddgst": false, 00:17:57.240 "multipath": "multipath" 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 
"method": "bdev_nvme_set_hotplug", 00:17:57.240 "params": { 00:17:57.240 "period_us": 100000, 00:17:57.240 "enable": false 00:17:57.240 } 00:17:57.240 }, 00:17:57.240 { 00:17:57.240 "method": "bdev_enable_histogram", 00:17:57.240 "params": { 00:17:57.240 "name": "nvme0n1", 00:17:57.240 "enable": true 00:17:57.240 } 00:17:57.241 }, 00:17:57.241 { 00:17:57.241 "method": "bdev_wait_for_examine" 00:17:57.241 } 00:17:57.241 ] 00:17:57.241 }, 00:17:57.241 { 00:17:57.241 "subsystem": "nbd", 00:17:57.241 "config": [] 00:17:57.241 } 00:17:57.241 ] 00:17:57.241 }' 00:17:57.241 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.241 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.241 [2024-11-04 16:29:23.971180] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:17:57.241 [2024-11-04 16:29:23.971227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836671 ] 00:17:57.241 [2024-11-04 16:29:24.033741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.499 [2024-11-04 16:29:24.077192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.499 [2024-11-04 16:29:24.229764] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.064 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.064 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.064 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:58.064 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:58.322 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.322 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.322 Running I/O for 1 seconds... 00:17:59.514 5465.00 IOPS, 21.35 MiB/s 00:17:59.515 Latency(us) 00:17:59.515 [2024-11-04T15:29:26.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.515 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:59.515 Verification LBA range: start 0x0 length 0x2000 00:17:59.515 nvme0n1 : 1.02 5505.96 21.51 0.00 0.00 23078.74 6023.07 20721.86 00:17:59.515 [2024-11-04T15:29:26.339Z] =================================================================================================================== 00:17:59.515 [2024-11-04T15:29:26.339Z] Total : 5505.96 21.51 0.00 0.00 23078.74 6023.07 20721.86 00:17:59.515 { 00:17:59.515 "results": [ 00:17:59.515 { 00:17:59.515 "job": "nvme0n1", 00:17:59.515 "core_mask": "0x2", 00:17:59.515 "workload": "verify", 00:17:59.515 "status": "finished", 00:17:59.515 "verify_range": { 00:17:59.515 "start": 0, 00:17:59.515 "length": 8192 00:17:59.515 }, 00:17:59.515 "queue_depth": 128, 00:17:59.515 "io_size": 4096, 00:17:59.515 "runtime": 1.015809, 00:17:59.515 "iops": 5505.9563362797535, 00:17:59.515 "mibps": 21.507641938592787, 00:17:59.515 "io_failed": 0, 00:17:59.515 "io_timeout": 0, 00:17:59.515 "avg_latency_us": 23078.73733595566, 00:17:59.515 "min_latency_us": 6023.070476190476, 00:17:59.515 "max_latency_us": 20721.859047619047 00:17:59.515 } 00:17:59.515 ], 00:17:59.515 "core_count": 1 00:17:59.515 } 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:59.515 16:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:59.515 nvmf_trace.0 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2836671 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2836671 ']' 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2836671 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2836671 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836671' 00:17:59.515 killing process with pid 2836671 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2836671 00:17:59.515 Received shutdown signal, test time was about 1.000000 seconds 00:17:59.515 00:17:59.515 Latency(us) 00:17:59.515 [2024-11-04T15:29:26.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.515 [2024-11-04T15:29:26.339Z] =================================================================================================================== 00:17:59.515 [2024-11-04T15:29:26.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2836671 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.773 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.773 rmmod nvme_tcp 00:17:59.773 rmmod nvme_fabrics 00:17:59.774 rmmod nvme_keyring 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2836550 ']' 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2836550 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2836550 ']' 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2836550 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836550 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836550' 00:17:59.774 killing process with pid 2836550 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2836550 00:17:59.774 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2836550 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.033 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.935 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:01.935 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Qykfq14PcS /tmp/tmp.UKApxZXqUL /tmp/tmp.0e6TiEhu3u 00:18:02.194 00:18:02.194 real 1m18.217s 00:18:02.194 user 2m0.034s 00:18:02.194 sys 0m29.741s 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.194 ************************************ 00:18:02.194 END TEST nvmf_tls 00:18:02.194 ************************************ 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.194 ************************************ 00:18:02.194 START TEST nvmf_fips 00:18:02.194 ************************************ 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:02.194 * Looking for test storage... 00:18:02.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.194 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.195 
16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:02.195 16:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.195 --rc genhtml_branch_coverage=1 00:18:02.195 --rc genhtml_function_coverage=1 00:18:02.195 --rc genhtml_legend=1 00:18:02.195 --rc geninfo_all_blocks=1 00:18:02.195 --rc geninfo_unexecuted_blocks=1 00:18:02.195 00:18:02.195 ' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.195 --rc genhtml_branch_coverage=1 00:18:02.195 --rc genhtml_function_coverage=1 00:18:02.195 --rc genhtml_legend=1 00:18:02.195 --rc geninfo_all_blocks=1 00:18:02.195 --rc geninfo_unexecuted_blocks=1 00:18:02.195 00:18:02.195 ' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.195 --rc genhtml_branch_coverage=1 00:18:02.195 --rc genhtml_function_coverage=1 00:18:02.195 --rc genhtml_legend=1 00:18:02.195 --rc geninfo_all_blocks=1 00:18:02.195 --rc geninfo_unexecuted_blocks=1 00:18:02.195 00:18:02.195 ' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.195 --rc genhtml_branch_coverage=1 00:18:02.195 --rc genhtml_function_coverage=1 00:18:02.195 --rc genhtml_legend=1 00:18:02.195 --rc geninfo_all_blocks=1 00:18:02.195 --rc geninfo_unexecuted_blocks=1 00:18:02.195 00:18:02.195 ' 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.195 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.195 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.195 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:02.195 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:02.454 Error setting digest 00:18:02.454 400200FF597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:02.454 400200FF597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:02.454 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.455 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.455 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
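The `cmp_versions` trace earlier in this section (scripts/common.sh@333-367) splits each version string on `.`/`-`/`:` and compares components left to right. A simplified standalone sketch of that pattern — my own `ver_ge`, not the actual scripts/common.sh implementation, and it assumes purely numeric components:

```shell
#!/usr/bin/env bash
# ver_ge A B: return 0 (true) when version A >= version B, comparing
# numeric components left to right, as the cmp_versions trace does.
ver_ge() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"    # same IFS split the trace uses
    IFS=.-: read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        # A missing component counts as 0, so 3.1 compares equal to 3.1.0.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then return 0; fi
        if (( a < b )); then return 1; fi
    done
    return 0   # every component equal
}

ver_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"   # the exact comparison from this trace
```

This is why the fips.sh check above passes: the installed OpenSSL 3.1.1 clears the 3.0.0 floor component by component (3==3, then 1>0).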
00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:09.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:09.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
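The enumeration above buckets NICs by PCI vendor:device ID — `0x8086:0x159b` is matched into the `e810` list and bound to the `ice` driver in this run. A sketch of that classification using only the IDs visible in this log (an illustrative subset, not SPDK's full device table):

```shell
#!/usr/bin/env bash
# Map a PCI "vendor:device" pair to the NIC family buckets this trace
# filters on. Only IDs that appear in the log above are listed here.
nic_family() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx  ;;    # Mellanox/NVIDIA
        *)                           echo unknown ;;
    esac
}

nic_family 0x8086:0x159b   # matches "Found 0000:86:00.0 (0x8086 - 0x159b)"
```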
00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:09.016 Found net devices under 0000:86:00.0: cvl_0_0 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
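The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above strips everything up to the last `/` in each sysfs path, leaving bare interface names (`cvl_0_0`, `cvl_0_1`). A small self-contained sketch of that parameter-expansion idiom, with the glob results stubbed in:

```shell
#!/usr/bin/env bash
# Net-device directories under a PCI device live at
# /sys/bus/pci/devices/<pci>/net/<ifname>; keeping only the basename
# via "${arr[@]##*/}" is how the trace recovers the interface names.
# The paths below are stand-ins for real glob results.
pci_net_devs=(
    /sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0
    /sys/bus/pci/devices/0000:86:00.1/net/cvl_0_1
)
pci_net_devs=("${pci_net_devs[@]##*/}")   # drop the longest */ prefix
printf '%s\n' "${pci_net_devs[@]}"        # prints: cvl_0_0  then  cvl_0_1
```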
00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:09.016 Found net devices under 0000:86:00.1: cvl_0_1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.016 16:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.016 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:18:09.017 00:18:09.017 --- 10.0.0.2 ping statistics --- 00:18:09.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.017 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:18:09.017 00:18:09.017 --- 10.0.0.1 ping statistics --- 00:18:09.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.017 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.017 16:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2840596 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2840596 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2840596 ']' 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.017 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 [2024-11-04 16:29:34.941872] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:18:09.017 [2024-11-04 16:29:34.941928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.017 [2024-11-04 16:29:35.011912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.017 [2024-11-04 16:29:35.054652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.017 [2024-11-04 16:29:35.054684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.017 [2024-11-04 16:29:35.054691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.017 [2024-11-04 16:29:35.054697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.017 [2024-11-04 16:29:35.054702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
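The `ipts` call traced at nvmf/common.sh@287/@790 above expands a plain iptables rule into one tagged with an `SPDK_NVMF:<args>` comment, so teardown can later find and delete exactly the rules this test added. A sketch of that wrapper pattern — printing the command instead of executing it, so it runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the ipts helper seen in the trace: append a comment-match
# carrying the original arguments. (The real helper runs iptables;
# echoing here keeps the sketch runnable unprivileged.)
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do the inverse: list rules, grep for `SPDK_NVMF:`, and delete only the tagged ones, leaving any pre-existing firewall rules untouched.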
00:18:09.017 [2024-11-04 16:29:35.055247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7WA 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7WA 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7WA 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7WA 00:18:09.017 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.275 [2024-11-04 16:29:35.964791] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.275 [2024-11-04 16:29:35.980807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.275 [2024-11-04 16:29:35.981039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.275 malloc0 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2840848 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2840848 /var/tmp/bdevperf.sock 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2840848 ']' 00:18:09.275 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.276 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.276 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.276 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.276 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:09.276 [2024-11-04 16:29:36.096090] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
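The key handling traced at fips/fips.sh@138-140 above follows a common pattern for NVMe/TCP TLS PSK files: write the interchange-format string to a `mktemp` path and restrict it to owner-only access before passing the path to `keyring_file_add_key`. A sketch with a placeholder string (shaped like the interchange format but not a valid key, and not the key from this run):

```shell
#!/usr/bin/env bash
# Write a PSK interchange string to a private temp file. The value
# below is a dummy placeholder, not a usable TLS key.
key='NVMeTLSkey-1:01:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA:'
key_path=$(mktemp -t spdk-psk.XXX)     # e.g. /tmp/spdk-psk.7WA, as in the log
echo -n "$key" > "$key_path"           # -n: no trailing newline in the key file
chmod 0600 "$key_path"                 # keyring consumers expect owner-only perms
echo "$key_path"
```

The caller is responsible for removing the file once the controller using it has been torn down, which is what the `cleanup` trap registered at fips/fips.sh@134 does in this test.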
00:18:09.276 [2024-11-04 16:29:36.096141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840848 ] 00:18:09.534 [2024-11-04 16:29:36.154740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.534 [2024-11-04 16:29:36.195660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.534 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.534 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:09.534 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7WA 00:18:09.792 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.051 [2024-11-04 16:29:36.646231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.051 TLSTESTn1 00:18:10.051 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:10.051 Running I/O for 10 seconds... 
00:18:12.359 5467.00 IOPS, 21.36 MiB/s [2024-11-04T15:29:40.117Z] 5538.50 IOPS, 21.63 MiB/s [2024-11-04T15:29:41.051Z] 5505.33 IOPS, 21.51 MiB/s [2024-11-04T15:29:41.985Z] 5532.00 IOPS, 21.61 MiB/s [2024-11-04T15:29:42.918Z] 5543.40 IOPS, 21.65 MiB/s [2024-11-04T15:29:43.851Z] 5545.33 IOPS, 21.66 MiB/s [2024-11-04T15:29:45.222Z] 5551.00 IOPS, 21.68 MiB/s [2024-11-04T15:29:46.156Z] 5522.25 IOPS, 21.57 MiB/s [2024-11-04T15:29:47.091Z] 5541.44 IOPS, 21.65 MiB/s [2024-11-04T15:29:47.091Z] 5547.40 IOPS, 21.67 MiB/s 00:18:20.267 Latency(us) 00:18:20.267 [2024-11-04T15:29:47.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.267 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:20.267 Verification LBA range: start 0x0 length 0x2000 00:18:20.267 TLSTESTn1 : 10.02 5549.82 21.68 0.00 0.00 23026.75 6896.88 24341.94 00:18:20.267 [2024-11-04T15:29:47.091Z] =================================================================================================================== 00:18:20.267 [2024-11-04T15:29:47.091Z] Total : 5549.82 21.68 0.00 0.00 23026.75 6896.88 24341.94 00:18:20.267 { 00:18:20.267 "results": [ 00:18:20.267 { 00:18:20.267 "job": "TLSTESTn1", 00:18:20.267 "core_mask": "0x4", 00:18:20.267 "workload": "verify", 00:18:20.267 "status": "finished", 00:18:20.267 "verify_range": { 00:18:20.267 "start": 0, 00:18:20.267 "length": 8192 00:18:20.267 }, 00:18:20.267 "queue_depth": 128, 00:18:20.267 "io_size": 4096, 00:18:20.267 "runtime": 10.018519, 00:18:20.267 "iops": 5549.822284112053, 00:18:20.267 "mibps": 21.678993297312708, 00:18:20.267 "io_failed": 0, 00:18:20.267 "io_timeout": 0, 00:18:20.267 "avg_latency_us": 23026.753680929, 00:18:20.267 "min_latency_us": 6896.88380952381, 00:18:20.267 "max_latency_us": 24341.942857142858 00:18:20.267 } 00:18:20.267 ], 00:18:20.267 "core_count": 1 00:18:20.267 } 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:20.267 
16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:20.267 nvmf_trace.0 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2840848 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2840848 ']' 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2840848 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.267 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840848 00:18:20.267 16:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.267 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.267 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840848' 00:18:20.267 killing process with pid 2840848 00:18:20.268 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2840848 00:18:20.268 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.268 00:18:20.268 Latency(us) 00:18:20.268 [2024-11-04T15:29:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.268 [2024-11-04T15:29:47.092Z] =================================================================================================================== 00:18:20.268 [2024-11-04T15:29:47.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.268 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2840848 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.526 rmmod nvme_tcp 00:18:20.526 rmmod nvme_fabrics 00:18:20.526 rmmod nvme_keyring 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2840596 ']' 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2840596 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2840596 ']' 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2840596 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840596 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840596' 00:18:20.526 killing process with pid 2840596 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2840596 00:18:20.526 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2840596 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.784 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7WA 00:18:23.317 00:18:23.317 real 0m20.704s 00:18:23.317 user 0m21.793s 00:18:23.317 sys 0m9.430s 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:23.317 ************************************ 00:18:23.317 END TEST nvmf_fips 00:18:23.317 ************************************ 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.317 ************************************ 00:18:23.317 START TEST nvmf_control_msg_list 00:18:23.317 ************************************ 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:23.317 * Looking for test storage... 00:18:23.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.317 16:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.317 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:23.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.317 --rc genhtml_branch_coverage=1 00:18:23.317 --rc genhtml_function_coverage=1 00:18:23.317 --rc genhtml_legend=1 00:18:23.317 --rc geninfo_all_blocks=1 00:18:23.318 --rc geninfo_unexecuted_blocks=1 00:18:23.318 00:18:23.318 ' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.318 --rc genhtml_branch_coverage=1 00:18:23.318 --rc genhtml_function_coverage=1 00:18:23.318 --rc genhtml_legend=1 00:18:23.318 --rc geninfo_all_blocks=1 00:18:23.318 --rc geninfo_unexecuted_blocks=1 00:18:23.318 00:18:23.318 ' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.318 --rc genhtml_branch_coverage=1 00:18:23.318 --rc genhtml_function_coverage=1 00:18:23.318 --rc genhtml_legend=1 00:18:23.318 --rc geninfo_all_blocks=1 00:18:23.318 --rc geninfo_unexecuted_blocks=1 00:18:23.318 00:18:23.318 ' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:18:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.318 --rc genhtml_branch_coverage=1 00:18:23.318 --rc genhtml_function_coverage=1 00:18:23.318 --rc genhtml_legend=1 00:18:23.318 --rc geninfo_all_blocks=1 00:18:23.318 --rc geninfo_unexecuted_blocks=1 00:18:23.318 00:18:23.318 ' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.318 16:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.318 16:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:18:23.318 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.583 16:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:28.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:28.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.583 16:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:28.583 Found net devices under 0000:86:00.0: cvl_0_0 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.583 16:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:28.583 Found net devices under 0000:86:00.1: cvl_0_1 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.583 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.583 16:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:18:28.583 00:18:28.583 --- 10.0.0.2 ping statistics --- 00:18:28.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.583 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:18:28.583 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:18:28.583 00:18:28.583 --- 10.0.0.1 ping statistics --- 00:18:28.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.583 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2846204 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2846204 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2846204 ']' 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.584 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.584 [2024-11-04 16:29:55.275277] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:18:28.584 [2024-11-04 16:29:55.275319] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.584 [2024-11-04 16:29:55.339969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.584 [2024-11-04 16:29:55.381483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.584 [2024-11-04 16:29:55.381517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.584 [2024-11-04 16:29:55.381524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.584 [2024-11-04 16:29:55.381531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.584 [2024-11-04 16:29:55.381535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.584 [2024-11-04 16:29:55.382070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 [2024-11-04 16:29:55.507987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.841 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.842 Malloc0 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:28.842 [2024-11-04 16:29:55.548105] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2846228 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2846229 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2846230 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2846228 00:18:28.842 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:28.842 [2024-11-04 16:29:55.622517] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:18:28.842 [2024-11-04 16:29:55.632812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:28.842 [2024-11-04 16:29:55.632973] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:30.215 Initializing NVMe Controllers 00:18:30.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:30.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:30.215 Initialization complete. Launching workers. 00:18:30.215 ======================================================== 00:18:30.215 Latency(us) 00:18:30.215 Device Information : IOPS MiB/s Average min max 00:18:30.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5562.00 21.73 179.43 148.80 41173.23 00:18:30.215 ======================================================== 00:18:30.215 Total : 5562.00 21.73 179.43 148.80 41173.23 00:18:30.215 00:18:30.215 Initializing NVMe Controllers 00:18:30.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:30.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:30.215 Initialization complete. Launching workers. 
00:18:30.215 ======================================================== 00:18:30.215 Latency(us) 00:18:30.215 Device Information : IOPS MiB/s Average min max 00:18:30.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6011.00 23.48 166.00 135.21 401.69 00:18:30.215 ======================================================== 00:18:30.215 Total : 6011.00 23.48 166.00 135.21 401.69 00:18:30.215 00:18:30.215 Initializing NVMe Controllers 00:18:30.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:30.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:30.215 Initialization complete. Launching workers. 00:18:30.215 ======================================================== 00:18:30.215 Latency(us) 00:18:30.215 Device Information : IOPS MiB/s Average min max 00:18:30.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40984.57 40717.82 41942.98 00:18:30.215 ======================================================== 00:18:30.215 Total : 25.00 0.10 40984.57 40717.82 41942.98 00:18:30.215 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2846229 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2846230 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:30.215 16:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:30.215 rmmod nvme_tcp 00:18:30.215 rmmod nvme_fabrics 00:18:30.215 rmmod nvme_keyring 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2846204 ']' 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2846204 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2846204 ']' 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2846204 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846204 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2846204' 00:18:30.215 killing process with pid 2846204 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2846204 00:18:30.215 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2846204 00:18:30.215 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.215 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.215 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.215 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.474 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:32.459 00:18:32.459 real 0m9.491s 00:18:32.459 user 0m6.186s 
00:18:32.459 sys 0m5.056s 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.459 ************************************ 00:18:32.459 END TEST nvmf_control_msg_list 00:18:32.459 ************************************ 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.459 ************************************ 00:18:32.459 START TEST nvmf_wait_for_buf 00:18:32.459 ************************************ 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:32.459 * Looking for test storage... 
00:18:32.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:32.459 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:18:32.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.756 --rc genhtml_branch_coverage=1 00:18:32.756 --rc genhtml_function_coverage=1 00:18:32.756 --rc genhtml_legend=1 00:18:32.756 --rc geninfo_all_blocks=1 00:18:32.756 --rc geninfo_unexecuted_blocks=1 00:18:32.756 00:18:32.756 ' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:32.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.756 --rc genhtml_branch_coverage=1 00:18:32.756 --rc genhtml_function_coverage=1 00:18:32.756 --rc genhtml_legend=1 00:18:32.756 --rc geninfo_all_blocks=1 00:18:32.756 --rc geninfo_unexecuted_blocks=1 00:18:32.756 00:18:32.756 ' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:32.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.756 --rc genhtml_branch_coverage=1 00:18:32.756 --rc genhtml_function_coverage=1 00:18:32.756 --rc genhtml_legend=1 00:18:32.756 --rc geninfo_all_blocks=1 00:18:32.756 --rc geninfo_unexecuted_blocks=1 00:18:32.756 00:18:32.756 ' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:32.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.756 --rc genhtml_branch_coverage=1 00:18:32.756 --rc genhtml_function_coverage=1 00:18:32.756 --rc genhtml_legend=1 00:18:32.756 --rc geninfo_all_blocks=1 00:18:32.756 --rc geninfo_unexecuted_blocks=1 00:18:32.756 00:18:32.756 ' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.756 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:32.757 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:38.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:38.021 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:38.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:38.022 Found net devices under 0000:86:00.0: cvl_0_0 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.022 16:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:38.022 Found net devices under 0000:86:00.1: cvl_0_1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:38.022 16:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:38.022 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.279 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.279 16:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.279 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:38.279 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:38.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:18:38.280 00:18:38.280 --- 10.0.0.2 ping statistics --- 00:18:38.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.280 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:18:38.280 00:18:38.280 --- 10.0.0.1 ping statistics --- 00:18:38.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.280 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2850045 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2850045 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2850045 ']' 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.280 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.280 [2024-11-04 16:30:04.974829] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:18:38.280 [2024-11-04 16:30:04.974871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.280 [2024-11-04 16:30:05.041732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.280 [2024-11-04 16:30:05.082743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.280 [2024-11-04 16:30:05.082780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:38.280 [2024-11-04 16:30:05.082787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.280 [2024-11-04 16:30:05.082792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.280 [2024-11-04 16:30:05.082798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.280 [2024-11-04 16:30:05.083361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 
16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 Malloc0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.538 [2024-11-04 16:30:05.263211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:38.538 [2024-11-04 16:30:05.287379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:38.538 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:38.796 [2024-11-04 16:30:05.369674] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:40.170 Initializing NVMe Controllers 00:18:40.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:40.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:40.170 Initialization complete. Launching workers. 00:18:40.170 ======================================================== 00:18:40.170 Latency(us) 00:18:40.170 Device Information : IOPS MiB/s Average min max 00:18:40.170 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.00 7.88 66013.96 7277.02 191529.64 00:18:40.170 ======================================================== 00:18:40.170 Total : 63.00 7.88 66013.96 7277.02 191529.64 00:18:40.170 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.170 16:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=982 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 982 -eq 0 ]] 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.170 rmmod nvme_tcp 00:18:40.170 rmmod nvme_fabrics 00:18:40.170 rmmod nvme_keyring 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2850045 ']' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2850045 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2850045 ']' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2850045 
00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850045 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850045' 00:18:40.170 killing process with pid 2850045 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2850045 00:18:40.170 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2850045 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.429 16:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.429 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.329 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:42.587 00:18:42.588 real 0m9.975s 00:18:42.588 user 0m3.861s 00:18:42.588 sys 0m4.548s 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:42.588 ************************************ 00:18:42.588 END TEST nvmf_wait_for_buf 00:18:42.588 ************************************ 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:18:42.588 16:30:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.853 
16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:47.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.853 16:30:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:47.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:47.853 Found net devices under 0000:86:00.0: cvl_0_0 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:47.853 Found net devices under 0000:86:00.1: cvl_0_1 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.853 ************************************ 00:18:47.853 START TEST nvmf_perf_adq 00:18:47.853 ************************************ 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:47.853 * Looking for test storage... 00:18:47.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.853 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.854 --rc genhtml_branch_coverage=1 00:18:47.854 --rc genhtml_function_coverage=1 00:18:47.854 --rc genhtml_legend=1 00:18:47.854 --rc geninfo_all_blocks=1 00:18:47.854 --rc geninfo_unexecuted_blocks=1 00:18:47.854 00:18:47.854 ' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.854 --rc genhtml_branch_coverage=1 00:18:47.854 --rc genhtml_function_coverage=1 00:18:47.854 --rc genhtml_legend=1 00:18:47.854 --rc geninfo_all_blocks=1 00:18:47.854 --rc geninfo_unexecuted_blocks=1 00:18:47.854 00:18:47.854 ' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.854 --rc genhtml_branch_coverage=1 00:18:47.854 --rc genhtml_function_coverage=1 00:18:47.854 --rc genhtml_legend=1 00:18:47.854 --rc geninfo_all_blocks=1 00:18:47.854 --rc geninfo_unexecuted_blocks=1 00:18:47.854 00:18:47.854 ' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.854 --rc genhtml_branch_coverage=1 00:18:47.854 --rc genhtml_function_coverage=1 00:18:47.854 --rc genhtml_legend=1 00:18:47.854 --rc geninfo_all_blocks=1 00:18:47.854 --rc geninfo_unexecuted_blocks=1 00:18:47.854 00:18:47.854 ' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.854 16:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.854 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.116 16:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.116 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:53.117 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:53.117 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:53.117 Found net devices under 0000:86:00.0: cvl_0_0 00:18:53.117 16:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:53.117 Found net devices under 0000:86:00.1: cvl_0_1 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:18:53.117 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:18:54.492 16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:18:56.392 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:01.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:01.661 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:01.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.661 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:01.661 Found net devices under 0000:86:00.0: cvl_0_0 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:01.662 Found net devices under 0000:86:00.1: cvl_0_1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:01.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:19:01.662 00:19:01.662 --- 10.0.0.2 ping statistics --- 00:19:01.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.662 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:19:01.662 00:19:01.662 --- 10.0.0.1 ping statistics --- 00:19:01.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.662 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2858626 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2858626 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2858626 ']' 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.662 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.662 [2024-11-04 16:30:28.378996] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:19:01.662 [2024-11-04 16:30:28.379045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.662 [2024-11-04 16:30:28.440841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.921 [2024-11-04 16:30:28.485137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.921 [2024-11-04 16:30:28.485171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.921 [2024-11-04 16:30:28.485179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.921 [2024-11-04 16:30:28.485185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.921 [2024-11-04 16:30:28.485190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.921 [2024-11-04 16:30:28.486675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.921 [2024-11-04 16:30:28.486778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.921 [2024-11-04 16:30:28.489617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.921 [2024-11-04 16:30:28.489621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:01.921 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.921 [2024-11-04 16:30:28.725549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.921 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.179 Malloc1 00:19:02.179 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.179 [2024-11-04 16:30:28.786456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2858667 00:19:02.179 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:02.179 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:04.091 "tick_rate": 2100000000, 00:19:04.091 "poll_groups": [ 00:19:04.091 { 00:19:04.091 "name": "nvmf_tgt_poll_group_000", 00:19:04.091 "admin_qpairs": 1, 00:19:04.091 "io_qpairs": 1, 00:19:04.091 "current_admin_qpairs": 1, 00:19:04.091 "current_io_qpairs": 1, 00:19:04.091 "pending_bdev_io": 0, 00:19:04.091 "completed_nvme_io": 20500, 00:19:04.091 "transports": [ 00:19:04.091 { 00:19:04.091 "trtype": "TCP" 00:19:04.091 } 00:19:04.091 ] 00:19:04.091 }, 00:19:04.091 { 00:19:04.091 "name": "nvmf_tgt_poll_group_001", 00:19:04.091 "admin_qpairs": 0, 00:19:04.091 "io_qpairs": 1, 00:19:04.091 "current_admin_qpairs": 0, 00:19:04.091 "current_io_qpairs": 1, 00:19:04.091 "pending_bdev_io": 0, 00:19:04.091 "completed_nvme_io": 20781, 00:19:04.091 "transports": [ 00:19:04.091 { 00:19:04.091 "trtype": "TCP" 00:19:04.091 } 00:19:04.091 ] 00:19:04.091 }, 00:19:04.091 { 00:19:04.091 "name": "nvmf_tgt_poll_group_002", 00:19:04.091 "admin_qpairs": 0, 00:19:04.091 "io_qpairs": 1, 00:19:04.091 "current_admin_qpairs": 0, 00:19:04.091 "current_io_qpairs": 1, 00:19:04.091 "pending_bdev_io": 0, 00:19:04.091 "completed_nvme_io": 20327, 00:19:04.091 
"transports": [ 00:19:04.091 { 00:19:04.091 "trtype": "TCP" 00:19:04.091 } 00:19:04.091 ] 00:19:04.091 }, 00:19:04.091 { 00:19:04.091 "name": "nvmf_tgt_poll_group_003", 00:19:04.091 "admin_qpairs": 0, 00:19:04.091 "io_qpairs": 1, 00:19:04.091 "current_admin_qpairs": 0, 00:19:04.091 "current_io_qpairs": 1, 00:19:04.091 "pending_bdev_io": 0, 00:19:04.091 "completed_nvme_io": 20261, 00:19:04.091 "transports": [ 00:19:04.091 { 00:19:04.091 "trtype": "TCP" 00:19:04.091 } 00:19:04.091 ] 00:19:04.091 } 00:19:04.091 ] 00:19:04.091 }' 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:04.091 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2858667 00:19:12.206 Initializing NVMe Controllers 00:19:12.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:12.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:12.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:12.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:12.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:12.206 Initialization complete. Launching workers. 
00:19:12.206 ======================================================== 00:19:12.206 Latency(us) 00:19:12.206 Device Information : IOPS MiB/s Average min max 00:19:12.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10768.20 42.06 5943.94 1654.88 10320.55 00:19:12.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11081.40 43.29 5776.16 2245.50 13500.00 00:19:12.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10889.20 42.54 5882.73 2482.63 42943.47 00:19:12.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10937.00 42.72 5851.62 1904.30 9687.82 00:19:12.206 ======================================================== 00:19:12.206 Total : 43675.79 170.61 5862.99 1654.88 42943.47 00:19:12.206 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.206 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.206 rmmod nvme_tcp 00:19:12.206 rmmod nvme_fabrics 00:19:12.465 rmmod nvme_keyring 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:12.465 16:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2858626 ']' 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2858626 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2858626 ']' 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2858626 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858626 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858626' 00:19:12.465 killing process with pid 2858626 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2858626 00:19:12.465 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2858626 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:12.724 
16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.724 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.640 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.640 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:14.640 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:14.640 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:16.018 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:17.922 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.196 16:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:23.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:23.196 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:23.196 Found net devices under 0000:86:00.0: cvl_0_0 00:19:23.196 16:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:23.196 Found net devices under 0000:86:00.1: cvl_0_1 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.196 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:23.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:19:23.197 00:19:23.197 --- 10.0.0.2 ping statistics --- 00:19:23.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.197 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:23.197 00:19:23.197 --- 10.0.0.1 ping statistics --- 00:19:23.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.197 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:23.197 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:23.197 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:23.197 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:23.197 net.core.busy_poll = 1 00:19:23.197 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:23.197 net.core.busy_read = 1 00:19:23.197 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:23.197 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2862558 00:19:23.455 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2862558 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2862558 ']' 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.456 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.714 [2024-11-04 16:30:50.281716] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:19:23.714 [2024-11-04 16:30:50.281762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.714 [2024-11-04 16:30:50.354054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.714 [2024-11-04 16:30:50.397393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.714 [2024-11-04 16:30:50.397429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.714 [2024-11-04 16:30:50.397438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.714 [2024-11-04 16:30:50.397445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:23.714 [2024-11-04 16:30:50.397450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.714 [2024-11-04 16:30:50.398920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.714 [2024-11-04 16:30:50.399020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.714 [2024-11-04 16:30:50.399038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.714 [2024-11-04 16:30:50.399044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:23.714 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.715 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 [2024-11-04 16:30:50.616023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.973 16:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 Malloc1 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.973 [2024-11-04 16:30:50.676035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2862685 
00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:23.973 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:25.873 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:25.873 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.873 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:26.132 "tick_rate": 2100000000, 00:19:26.132 "poll_groups": [ 00:19:26.132 { 00:19:26.132 "name": "nvmf_tgt_poll_group_000", 00:19:26.132 "admin_qpairs": 1, 00:19:26.132 "io_qpairs": 1, 00:19:26.132 "current_admin_qpairs": 1, 00:19:26.132 "current_io_qpairs": 1, 00:19:26.132 "pending_bdev_io": 0, 00:19:26.132 "completed_nvme_io": 29056, 00:19:26.132 "transports": [ 00:19:26.132 { 00:19:26.132 "trtype": "TCP" 00:19:26.132 } 00:19:26.132 ] 00:19:26.132 }, 00:19:26.132 { 00:19:26.132 "name": "nvmf_tgt_poll_group_001", 00:19:26.132 "admin_qpairs": 0, 00:19:26.132 "io_qpairs": 3, 00:19:26.132 "current_admin_qpairs": 0, 00:19:26.132 "current_io_qpairs": 3, 00:19:26.132 "pending_bdev_io": 0, 00:19:26.132 "completed_nvme_io": 31226, 00:19:26.132 "transports": [ 00:19:26.132 { 00:19:26.132 "trtype": "TCP" 00:19:26.132 } 00:19:26.132 ] 00:19:26.132 }, 00:19:26.132 { 00:19:26.132 "name": "nvmf_tgt_poll_group_002", 00:19:26.132 "admin_qpairs": 0, 00:19:26.132 "io_qpairs": 0, 00:19:26.132 "current_admin_qpairs": 0, 
00:19:26.132 "current_io_qpairs": 0, 00:19:26.132 "pending_bdev_io": 0, 00:19:26.132 "completed_nvme_io": 0, 00:19:26.132 "transports": [ 00:19:26.132 { 00:19:26.132 "trtype": "TCP" 00:19:26.132 } 00:19:26.132 ] 00:19:26.132 }, 00:19:26.132 { 00:19:26.132 "name": "nvmf_tgt_poll_group_003", 00:19:26.132 "admin_qpairs": 0, 00:19:26.132 "io_qpairs": 0, 00:19:26.132 "current_admin_qpairs": 0, 00:19:26.132 "current_io_qpairs": 0, 00:19:26.132 "pending_bdev_io": 0, 00:19:26.132 "completed_nvme_io": 0, 00:19:26.132 "transports": [ 00:19:26.132 { 00:19:26.132 "trtype": "TCP" 00:19:26.132 } 00:19:26.132 ] 00:19:26.132 } 00:19:26.132 ] 00:19:26.132 }' 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:26.132 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2862685 00:19:34.242 Initializing NVMe Controllers 00:19:34.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:34.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:34.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:34.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:34.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:34.242 Initialization complete. Launching workers. 
00:19:34.242 ======================================================== 00:19:34.242 Latency(us) 00:19:34.242 Device Information : IOPS MiB/s Average min max 00:19:34.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5347.80 20.89 11968.22 1240.91 60563.47 00:19:34.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5657.30 22.10 11314.83 1565.67 60754.10 00:19:34.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15535.10 60.68 4119.20 1380.94 44794.34 00:19:34.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4994.80 19.51 12826.60 1520.37 59190.14 00:19:34.242 ======================================================== 00:19:34.242 Total : 31534.99 123.18 8120.30 1240.91 60754.10 00:19:34.242 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.242 rmmod nvme_tcp 00:19:34.242 rmmod nvme_fabrics 00:19:34.242 rmmod nvme_keyring 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:34.242 16:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2862558 ']' 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2862558 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2862558 ']' 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2862558 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862558 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862558' 00:19:34.242 killing process with pid 2862558 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2862558 00:19:34.242 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2862558 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.501 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:19:37.792 00:19:37.792 real 0m49.837s 00:19:37.792 user 2m43.961s 00:19:37.792 sys 0m10.055s 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.792 ************************************ 00:19:37.792 END TEST nvmf_perf_adq 00:19:37.792 ************************************ 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:19:37.792 ************************************ 00:19:37.792 START TEST nvmf_shutdown 00:19:37.792 ************************************ 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:37.792 * Looking for test storage... 00:19:37.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.792 16:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.792 --rc genhtml_branch_coverage=1 00:19:37.792 --rc genhtml_function_coverage=1 00:19:37.792 --rc genhtml_legend=1 00:19:37.792 --rc geninfo_all_blocks=1 00:19:37.792 --rc geninfo_unexecuted_blocks=1 00:19:37.792 00:19:37.792 ' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.792 --rc genhtml_branch_coverage=1 00:19:37.792 --rc genhtml_function_coverage=1 00:19:37.792 --rc genhtml_legend=1 00:19:37.792 --rc geninfo_all_blocks=1 00:19:37.792 --rc geninfo_unexecuted_blocks=1 00:19:37.792 00:19:37.792 ' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.792 --rc genhtml_branch_coverage=1 00:19:37.792 --rc genhtml_function_coverage=1 00:19:37.792 --rc genhtml_legend=1 00:19:37.792 --rc geninfo_all_blocks=1 00:19:37.792 --rc geninfo_unexecuted_blocks=1 00:19:37.792 00:19:37.792 ' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.792 --rc genhtml_branch_coverage=1 00:19:37.792 --rc genhtml_function_coverage=1 00:19:37.792 --rc genhtml_legend=1 00:19:37.792 --rc geninfo_all_blocks=1 00:19:37.792 --rc geninfo_unexecuted_blocks=1 00:19:37.792 00:19:37.792 ' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
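The `lt 1.15 2` gate traced above (scripts/common.sh `cmp_versions`) splits both versions on `.-:` and compares them numerically field by field, treating missing fields as 0, to decide whether the installed `lcov` is older than 2. A simplified sketch of that comparison, assuming purely numeric components; `ver_lt` is a hypothetical stand-in, not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Component-wise "version less-than", mirroring the cmp_versions trace:
# split on ".-:", compare numerically, missing fields count as 0.
# ver_lt is a hypothetical helper, not scripts/common.sh itself.
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        # 10# forces base-10 so fields like "08" are not read as octal.
        (( 10#$a < 10#$b )) && return 0   # strictly smaller field: "<"
        (( 10#$a > 10#$b )) && return 1   # strictly larger field: ">="
    done
    return 1                              # equal versions are not "<"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Comparing field by field is what makes `1.9.9 < 1.10` come out true, which a plain string comparison would get wrong.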
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.792 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
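The `[: : integer expression expected` message in the log comes from `'[' '' -eq 1 ']'`: `[` requires a genuine integer on both sides of `-eq`, and the variable being tested was empty. A generic defensive sketch (not the SPDK fix) is to default the value before the comparison:

```shell
#!/usr/bin/env bash
# Reproduces the failure mode behind "[: : integer expression expected"
# and the usual guard for it. interrupt_mode is a hypothetical flag.
interrupt_mode=""   # unset/empty feature flag

# This would fail noisily, exactly like the log line above:
#   [ "$interrupt_mode" -eq 1 ] && echo "interrupt mode"

# ${var:-0} keeps the test well-formed whether the flag is set or not.
if [ "${interrupt_mode:-0}" -eq 1 ]; then
    mode=interrupt
else
    mode=polling
fi
echo "$mode"   # prints "polling"
```

In the traced run the script keeps going because the failed `[` simply returns nonzero, but the stderr noise is avoidable with the `${var:-0}` default.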
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:37.793 ************************************ 00:19:37.793 START TEST nvmf_shutdown_tc1 00:19:37.793 ************************************ 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:37.793 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:44.426 16:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:44.426 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.427 16:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.427 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.427 16:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.427 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.427 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.427 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:44.427 16:31:09 
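The discovery loop above sorts each PCI NIC into the `e810`/`x722`/`mlx` arrays by vendor:device ID, then globs `/sys/bus/pci/devices/$pci/net/*` to find the bound netdev; `0x159b` is why both `0000:86:00.x` ports land in the e810 bucket under the `ice` driver. A compact sketch of that ID bucketing, using only the IDs that appear in the trace; `nic_family` is a hypothetical helper, not an SPDK function:

```shell
#!/usr/bin/env bash
# Device-ID bucketing behind the e810/x722/mlx arrays in the trace.
# IDs are the ones visible in the log; nic_family is hypothetical.
nic_family() {
    case "$1" in
        0x1592|0x159b)                      echo e810 ;;   # Intel E810
        0x37d2)                             echo x722 ;;   # Intel X722
        0x1021|0xa2dc|0xa2d6|0x101[35bd79]) echo mlx  ;;   # Mellanox
        *)                                  echo unknown ;;
    esac
}

nic_family 0x159b   # family of the two 0000:86:00.x NICs found above
```

The trace's `[[ 0x159b == \0\x\1\0\1\7 ]]` checks are the unrolled form of the same classification.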
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.427 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:44.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:19:44.427 00:19:44.427 --- 10.0.0.2 ping statistics --- 00:19:44.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.427 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:19:44.427 00:19:44.427 --- 10.0.0.1 ping statistics --- 00:19:44.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.427 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.427 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
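The plumbing between `nvmf_tcp_init` and the two pings above moves the target-side NIC into its own network namespace and gives each side a /24 address, so initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0` inside `cvl_0_0_ns_spdk`) exchange real wire traffic on a single host. A dry-run sketch of that sequence, echoing instead of executing since `ip netns` needs root; the interface and namespace names are taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run of the netns setup traced above. Swap run() for "$@" (as
# root) to apply the commands for real.
ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1

run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$tgt_if" netns "$ns"                       # target NIC into ns
run ip addr add 10.0.0.1/24 dev "$ini_if"                   # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
run ip link set "$ini_if" up
run ip netns exec "$ns" ip link set "$tgt_if" up
run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1                  # target -> initiator
```

The two pings at the end are the same smoke test the log records (0.377 ms and 0.236 ms round trips) before the target application is launched inside the namespace.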
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2868118 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2868118 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2868118 ']' 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:44.428 [2024-11-04 16:31:10.267612] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:19:44.428 [2024-11-04 16:31:10.267655] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.428 [2024-11-04 16:31:10.335128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.428 [2024-11-04 16:31:10.378322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.428 [2024-11-04 16:31:10.378358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.428 [2024-11-04 16:31:10.378365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.428 [2024-11-04 16:31:10.378371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.428 [2024-11-04 16:31:10.378376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:44.428 [2024-11-04 16:31:10.380021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.428 [2024-11-04 16:31:10.380109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.428 [2024-11-04 16:31:10.380217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.428 [2024-11-04 16:31:10.380217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 [2024-11-04 16:31:10.516531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.428 16:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.428 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 Malloc1 00:19:44.428 [2024-11-04 16:31:10.623135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.428 Malloc2 00:19:44.428 Malloc3 00:19:44.428 Malloc4 00:19:44.428 Malloc5 00:19:44.428 Malloc6 00:19:44.428 Malloc7 00:19:44.428 Malloc8 00:19:44.428 Malloc9 
00:19:44.428 Malloc10 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2868191 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2868191 /var/tmp/bdevperf.sock 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2868191 ']' 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.428 { 00:19:44.428 "params": { 00:19:44.428 "name": "Nvme$subsystem", 00:19:44.428 "trtype": "$TEST_TRANSPORT", 00:19:44.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": 
${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 
00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 [2024-11-04 16:31:11.090632] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:19:44.429 [2024-11-04 16:31:11.090682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 
00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.429 { 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme$subsystem", 00:19:44.429 "trtype": "$TEST_TRANSPORT", 00:19:44.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "$NVMF_PORT", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.429 "hdgst": ${hdgst:-false}, 00:19:44.429 "ddgst": ${ddgst:-false} 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 } 00:19:44.429 EOF 00:19:44.429 )") 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:44.429 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme1", 00:19:44.429 "trtype": "tcp", 00:19:44.429 "traddr": "10.0.0.2", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "4420", 00:19:44.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.429 "hdgst": false, 00:19:44.429 "ddgst": false 00:19:44.429 }, 00:19:44.429 "method": "bdev_nvme_attach_controller" 00:19:44.429 },{ 00:19:44.429 "params": { 00:19:44.429 "name": "Nvme2", 00:19:44.429 "trtype": "tcp", 00:19:44.429 "traddr": "10.0.0.2", 00:19:44.429 "adrfam": "ipv4", 00:19:44.429 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme3", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme4", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 
00:19:44.430 "name": "Nvme5", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme6", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme7", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme8", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme9", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 },{ 00:19:44.430 "params": { 00:19:44.430 "name": "Nvme10", 00:19:44.430 "trtype": "tcp", 00:19:44.430 "traddr": "10.0.0.2", 00:19:44.430 "adrfam": "ipv4", 00:19:44.430 "trsvcid": "4420", 00:19:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:44.430 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:44.430 "hdgst": false, 00:19:44.430 "ddgst": false 00:19:44.430 }, 00:19:44.430 "method": "bdev_nvme_attach_controller" 00:19:44.430 }' 00:19:44.430 [2024-11-04 16:31:11.156787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.430 [2024-11-04 16:31:11.198247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2868191 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:46.378 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:47.313 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2868191 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2868118 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.313 "ddgst": ${ddgst:-false} 00:19:47.313 }, 00:19:47.313 "method": "bdev_nvme_attach_controller" 00:19:47.313 } 00:19:47.313 EOF 00:19:47.313 )") 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.313 16:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.313 "ddgst": ${ddgst:-false} 00:19:47.313 }, 00:19:47.313 "method": "bdev_nvme_attach_controller" 00:19:47.313 } 00:19:47.313 EOF 00:19:47.313 )") 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.313 "ddgst": ${ddgst:-false} 00:19:47.313 }, 00:19:47.313 "method": "bdev_nvme_attach_controller" 00:19:47.313 } 00:19:47.313 EOF 00:19:47.313 )") 00:19:47.313 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 
16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.313 "ddgst": ${ddgst:-false} 00:19:47.313 }, 00:19:47.313 "method": "bdev_nvme_attach_controller" 00:19:47.313 } 00:19:47.313 EOF 00:19:47.313 )") 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.313 "ddgst": ${ddgst:-false} 00:19:47.313 }, 00:19:47.313 "method": "bdev_nvme_attach_controller" 00:19:47.313 } 00:19:47.313 EOF 00:19:47.313 )") 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.313 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:19:47.313 { 00:19:47.313 "params": { 00:19:47.313 "name": "Nvme$subsystem", 00:19:47.313 "trtype": "$TEST_TRANSPORT", 00:19:47.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.313 "adrfam": "ipv4", 00:19:47.313 "trsvcid": "$NVMF_PORT", 00:19:47.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.313 "hdgst": ${hdgst:-false}, 00:19:47.314 "ddgst": ${ddgst:-false} 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 } 00:19:47.314 EOF 00:19:47.314 )") 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.314 { 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme$subsystem", 00:19:47.314 "trtype": "$TEST_TRANSPORT", 00:19:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "$NVMF_PORT", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.314 "hdgst": ${hdgst:-false}, 00:19:47.314 "ddgst": ${ddgst:-false} 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 } 00:19:47.314 EOF 00:19:47.314 )") 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.314 [2024-11-04 16:31:14.027353] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:19:47.314 [2024-11-04 16:31:14.027403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868686 ] 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.314 { 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme$subsystem", 00:19:47.314 "trtype": "$TEST_TRANSPORT", 00:19:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "$NVMF_PORT", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.314 "hdgst": ${hdgst:-false}, 00:19:47.314 "ddgst": ${ddgst:-false} 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 } 00:19:47.314 EOF 00:19:47.314 )") 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.314 { 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme$subsystem", 00:19:47.314 "trtype": "$TEST_TRANSPORT", 00:19:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "$NVMF_PORT", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.314 "hdgst": ${hdgst:-false}, 00:19:47.314 "ddgst": ${ddgst:-false} 00:19:47.314 }, 00:19:47.314 "method": 
"bdev_nvme_attach_controller" 00:19:47.314 } 00:19:47.314 EOF 00:19:47.314 )") 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:47.314 { 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme$subsystem", 00:19:47.314 "trtype": "$TEST_TRANSPORT", 00:19:47.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "$NVMF_PORT", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.314 "hdgst": ${hdgst:-false}, 00:19:47.314 "ddgst": ${ddgst:-false} 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 } 00:19:47.314 EOF 00:19:47.314 )") 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:47.314 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme1", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme2", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme3", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme4", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 
00:19:47.314 "name": "Nvme5", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme6", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme7", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme8", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:47.314 "hdgst": false, 00:19:47.314 "ddgst": false 00:19:47.314 }, 00:19:47.314 "method": "bdev_nvme_attach_controller" 00:19:47.314 },{ 00:19:47.314 "params": { 00:19:47.314 "name": "Nvme9", 00:19:47.314 "trtype": "tcp", 00:19:47.314 "traddr": "10.0.0.2", 00:19:47.314 "adrfam": "ipv4", 00:19:47.314 "trsvcid": "4420", 00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:47.314 "hdgst": false,
00:19:47.314 "ddgst": false
00:19:47.314 },
00:19:47.314 "method": "bdev_nvme_attach_controller"
00:19:47.314 },{
00:19:47.314 "params": {
00:19:47.314 "name": "Nvme10",
00:19:47.314 "trtype": "tcp",
00:19:47.314 "traddr": "10.0.0.2",
00:19:47.314 "adrfam": "ipv4",
00:19:47.314 "trsvcid": "4420",
00:19:47.314 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:19:47.314 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:19:47.314 "hdgst": false,
00:19:47.314 "ddgst": false
00:19:47.314 },
00:19:47.314 "method": "bdev_nvme_attach_controller"
00:19:47.314 }'
00:19:47.314 [2024-11-04 16:31:14.094008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:47.314 [2024-11-04 16:31:14.135867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:48.690 Running I/O for 1 seconds...
00:19:49.883 2246.00 IOPS, 140.38 MiB/s
00:19:49.883 Latency(us)
00:19:49.883 [2024-11-04T15:31:16.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:49.883 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme1n1 : 1.14 281.37 17.59 0.00 0.00 225468.22 26339.23 214708.42
00:19:49.883 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme2n1 : 1.09 234.52 14.66 0.00 0.00 263486.17 16477.62 225693.50
00:19:49.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme3n1 : 1.13 282.96 17.68 0.00 0.00 217909.35 20971.52 202724.69
00:19:49.883 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme4n1 : 1.14 280.86 17.55 0.00 0.00 216595.41 13356.86 218702.99
00:19:49.883 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme5n1 : 1.15 282.88 17.68 0.00 0.00 211868.04 3151.97 210713.84
00:19:49.883 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme6n1 : 1.15 278.55 17.41 0.00 0.00 212300.56 18474.91 229688.08
00:19:49.883 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme7n1 : 1.12 288.28 18.02 0.00 0.00 198024.29 12170.97 216705.71
00:19:49.883 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme8n1 : 1.15 277.92 17.37 0.00 0.00 206596.39 14417.92 226692.14
00:19:49.883 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme9n1 : 1.16 281.19 17.57 0.00 0.00 201098.66 2044.10 233682.65
00:19:49.883 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.883 Verification LBA range: start 0x0 length 0x400
00:19:49.883 Nvme10n1 : 1.16 276.18 17.26 0.00 0.00 201972.88 14605.17 224694.86
00:19:49.883 [2024-11-04T15:31:16.707Z] ===================================================================================================================
00:19:49.883 [2024-11-04T15:31:16.708Z] Total : 2764.72 172.79 0.00 0.00 214507.57 2044.10 233682.65
00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.884 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.142 rmmod nvme_tcp 00:19:50.142 rmmod nvme_fabrics 00:19:50.142 rmmod nvme_keyring 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2868118 ']' 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2868118 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2868118 ']' 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2868118 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868118 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868118' 00:19:50.142 killing process with pid 2868118 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2868118 00:19:50.142 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2868118 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:19:50.401 16:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.401 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:52.936 00:19:52.936 real 0m14.740s 00:19:52.936 user 0m32.760s 00:19:52.936 sys 0m5.560s 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.936 ************************************ 00:19:52.936 END TEST nvmf_shutdown_tc1 00:19:52.936 ************************************ 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.936 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:52.936 ************************************ 00:19:52.936 
START TEST nvmf_shutdown_tc2 00:19:52.936 ************************************ 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.937 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.937 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.937 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.937 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.937 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.937 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.937 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.937 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.938 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:52.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms
00:19:52.938
00:19:52.938 --- 10.0.0.2 ping statistics ---
00:19:52.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:52.938 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:52.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:52.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms
00:19:52.938
00:19:52.938 --- 10.0.0.1 ping statistics ---
00:19:52.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:52.938 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:52.938 16:31:19
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2869708 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2869708 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2869708 ']' 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.938 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.938 [2024-11-04 16:31:19.699530] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:19:52.938 [2024-11-04 16:31:19.699576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.197 [2024-11-04 16:31:19.769621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.197 [2024-11-04 16:31:19.809806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.197 [2024-11-04 16:31:19.809844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.197 [2024-11-04 16:31:19.809851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.197 [2024-11-04 16:31:19.809857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.197 [2024-11-04 16:31:19.809862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
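[Editor's note] The trace above shows `waitforlisten 2869708` blocking until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock` (note the `max_retries=100` local in autotest_common.sh). The retry loop can be sketched as follows; this is a hypothetical simplification, since the real helper also checks that the pid is still alive and probes the socket with an actual RPC rather than just polling for the path:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: retry up to max_retries times until
# the RPC socket path appears, then return success. Simplified -- the real
# autotest helper also verifies the pid is alive and issues a probe RPC.
waitforlisten_sketch() {
    local path=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0   # target is up and listening
        sleep 0.1
    done
    return 1                         # gave up waiting
}

# Demo: create the "socket" in the background, then wait for it.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
waitforlisten_sketch "$tmp/spdk.sock" 50 && echo "listening"
wait
rm -rf "$tmp"
```

Bounding the retries is what lets the test fail fast with a clear message instead of hanging when the target never comes up.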
00:19:53.197 [2024-11-04 16:31:19.811399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.197 [2024-11-04 16:31:19.811466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.197 [2024-11-04 16:31:19.812074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.197 [2024-11-04 16:31:19.812075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.197 [2024-11-04 16:31:19.955902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.197 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.197 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.455 Malloc1 00:19:53.455 [2024-11-04 16:31:20.069806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.455 Malloc2 00:19:53.455 Malloc3 00:19:53.455 Malloc4 00:19:53.455 Malloc5 00:19:53.455 Malloc6 00:19:53.713 Malloc7 00:19:53.713 Malloc8 00:19:53.713 Malloc9 
00:19:53.713 Malloc10 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2869979 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2869979 /var/tmp/bdevperf.sock 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2869979 ']' 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:53.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.713 { 00:19:53.713 "params": { 00:19:53.713 "name": "Nvme$subsystem", 00:19:53.713 "trtype": "$TEST_TRANSPORT", 00:19:53.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.713 "adrfam": "ipv4", 00:19:53.713 "trsvcid": "$NVMF_PORT", 00:19:53.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.713 "hdgst": ${hdgst:-false}, 00:19:53.713 "ddgst": ${ddgst:-false} 00:19:53.713 }, 00:19:53.713 "method": "bdev_nvme_attach_controller" 00:19:53.713 } 00:19:53.713 EOF 00:19:53.713 )") 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.713 { 00:19:53.713 "params": { 00:19:53.713 "name": "Nvme$subsystem", 00:19:53.713 "trtype": "$TEST_TRANSPORT", 00:19:53.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.713 
"adrfam": "ipv4", 00:19:53.713 "trsvcid": "$NVMF_PORT", 00:19:53.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.713 "hdgst": ${hdgst:-false}, 00:19:53.713 "ddgst": ${ddgst:-false} 00:19:53.713 }, 00:19:53.713 "method": "bdev_nvme_attach_controller" 00:19:53.713 } 00:19:53.713 EOF 00:19:53.713 )") 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.713 { 00:19:53.713 "params": { 00:19:53.713 "name": "Nvme$subsystem", 00:19:53.713 "trtype": "$TEST_TRANSPORT", 00:19:53.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.713 "adrfam": "ipv4", 00:19:53.713 "trsvcid": "$NVMF_PORT", 00:19:53.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.713 "hdgst": ${hdgst:-false}, 00:19:53.713 "ddgst": ${ddgst:-false} 00:19:53.713 }, 00:19:53.713 "method": "bdev_nvme_attach_controller" 00:19:53.713 } 00:19:53.713 EOF 00:19:53.713 )") 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.713 { 00:19:53.713 "params": { 00:19:53.713 "name": "Nvme$subsystem", 00:19:53.713 "trtype": "$TEST_TRANSPORT", 00:19:53.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.713 "adrfam": "ipv4", 00:19:53.713 "trsvcid": "$NVMF_PORT", 00:19:53.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:53.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.713 "hdgst": ${hdgst:-false}, 00:19:53.713 "ddgst": ${ddgst:-false} 00:19:53.713 }, 00:19:53.713 "method": "bdev_nvme_attach_controller" 00:19:53.713 } 00:19:53.713 EOF 00:19:53.713 )") 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.713 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.713 { 00:19:53.713 "params": { 00:19:53.713 "name": "Nvme$subsystem", 00:19:53.713 "trtype": "$TEST_TRANSPORT", 00:19:53.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.714 "adrfam": "ipv4", 00:19:53.714 "trsvcid": "$NVMF_PORT", 00:19:53.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.714 "hdgst": ${hdgst:-false}, 00:19:53.714 "ddgst": ${ddgst:-false} 00:19:53.714 }, 00:19:53.714 "method": "bdev_nvme_attach_controller" 00:19:53.714 } 00:19:53.714 EOF 00:19:53.714 )") 00:19:53.714 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.714 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.714 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.714 { 00:19:53.714 "params": { 00:19:53.714 "name": "Nvme$subsystem", 00:19:53.714 "trtype": "$TEST_TRANSPORT", 00:19:53.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.714 "adrfam": "ipv4", 00:19:53.714 "trsvcid": "$NVMF_PORT", 00:19:53.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.714 "hdgst": ${hdgst:-false}, 00:19:53.714 "ddgst": 
${ddgst:-false} 00:19:53.714 }, 00:19:53.714 "method": "bdev_nvme_attach_controller" 00:19:53.714 } 00:19:53.714 EOF 00:19:53.714 )") 00:19:53.714 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.972 { 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme$subsystem", 00:19:53.972 "trtype": "$TEST_TRANSPORT", 00:19:53.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "$NVMF_PORT", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.972 "hdgst": ${hdgst:-false}, 00:19:53.972 "ddgst": ${ddgst:-false} 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 } 00:19:53.972 EOF 00:19:53.972 )") 00:19:53.972 [2024-11-04 16:31:20.538596] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:19:53.972 [2024-11-04 16:31:20.538652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869979 ] 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.972 { 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme$subsystem", 00:19:53.972 "trtype": "$TEST_TRANSPORT", 00:19:53.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "$NVMF_PORT", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.972 "hdgst": ${hdgst:-false}, 00:19:53.972 "ddgst": ${ddgst:-false} 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 } 00:19:53.972 EOF 00:19:53.972 )") 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.972 { 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme$subsystem", 00:19:53.972 "trtype": "$TEST_TRANSPORT", 00:19:53.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "$NVMF_PORT", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.972 "hdgst": 
${hdgst:-false}, 00:19:53.972 "ddgst": ${ddgst:-false} 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 } 00:19:53.972 EOF 00:19:53.972 )") 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.972 { 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme$subsystem", 00:19:53.972 "trtype": "$TEST_TRANSPORT", 00:19:53.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "$NVMF_PORT", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.972 "hdgst": ${hdgst:-false}, 00:19:53.972 "ddgst": ${ddgst:-false} 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 } 00:19:53.972 EOF 00:19:53.972 )") 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
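[Editor's note] The block above is `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` at work: one heredoc-built JSON fragment per subsystem is appended to a `config` array, and the fragments are later joined with commas (nvmf/common.sh@585-586). A minimal sketch of that pattern, with hard-coded stand-ins for the test-environment variables and the final `jq .` pretty-printing step omitted to keep it dependency-free:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: collect one JSON fragment per
# subsystem with a heredoc, then comma-join the fragments via IFS.
# The transport/address values are stand-ins for the test environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the per-subsystem fragments into one comma-separated document
# (the real helper additionally pipes this through `jq .`).
(IFS=,; printf '%s\n' "${config[*]}")
```

Run as `bash gen.sh 1 2 3` to emit three attach-controller entries; with no arguments the `"${@:-1}"` default produces a single `Nvme1`/`cnode1` entry, which is why the expanded output further down lists Nvme1 through Nvme10 when ten subsystem indices are passed.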
00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:19:53.972 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme1", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme2", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme3", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme4", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 
00:19:53.972 "name": "Nvme5", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme6", 00:19:53.972 "trtype": "tcp", 00:19:53.972 "traddr": "10.0.0.2", 00:19:53.972 "adrfam": "ipv4", 00:19:53.972 "trsvcid": "4420", 00:19:53.972 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.972 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.972 "hdgst": false, 00:19:53.972 "ddgst": false 00:19:53.972 }, 00:19:53.972 "method": "bdev_nvme_attach_controller" 00:19:53.972 },{ 00:19:53.972 "params": { 00:19:53.972 "name": "Nvme7", 00:19:53.972 "trtype": "tcp", 00:19:53.973 "traddr": "10.0.0.2", 00:19:53.973 "adrfam": "ipv4", 00:19:53.973 "trsvcid": "4420", 00:19:53.973 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.973 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.973 "hdgst": false, 00:19:53.973 "ddgst": false 00:19:53.973 }, 00:19:53.973 "method": "bdev_nvme_attach_controller" 00:19:53.973 },{ 00:19:53.973 "params": { 00:19:53.973 "name": "Nvme8", 00:19:53.973 "trtype": "tcp", 00:19:53.973 "traddr": "10.0.0.2", 00:19:53.973 "adrfam": "ipv4", 00:19:53.973 "trsvcid": "4420", 00:19:53.973 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.973 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:53.973 "hdgst": false, 00:19:53.973 "ddgst": false 00:19:53.973 }, 00:19:53.973 "method": "bdev_nvme_attach_controller" 00:19:53.973 },{ 00:19:53.973 "params": { 00:19:53.973 "name": "Nvme9", 00:19:53.973 "trtype": "tcp", 00:19:53.973 "traddr": "10.0.0.2", 00:19:53.973 "adrfam": "ipv4", 00:19:53.973 "trsvcid": "4420", 00:19:53.973 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.973 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:53.973 "hdgst": false, 00:19:53.973 "ddgst": false 00:19:53.973 }, 00:19:53.973 "method": "bdev_nvme_attach_controller" 00:19:53.973 },{ 00:19:53.973 "params": { 00:19:53.973 "name": "Nvme10", 00:19:53.973 "trtype": "tcp", 00:19:53.973 "traddr": "10.0.0.2", 00:19:53.973 "adrfam": "ipv4", 00:19:53.973 "trsvcid": "4420", 00:19:53.973 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.973 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.973 "hdgst": false, 00:19:53.973 "ddgst": false 00:19:53.973 }, 00:19:53.973 "method": "bdev_nvme_attach_controller" 00:19:53.973 }' 00:19:53.973 [2024-11-04 16:31:20.606160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.973 [2024-11-04 16:31:20.646838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.344 Running I/O for 10 seconds... 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:55.910 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=73 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 73 -ge 100 ']' 00:19:55.910 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:56.169 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2869979 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2869979 ']' 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2869979 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.169 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2869979 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2869979' 00:19:56.169 killing process with pid 2869979 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2869979 00:19:56.169 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2869979
00:19:56.169 Received shutdown signal, test time was about 0.958664 seconds
00:19:56.169
00:19:56.169 Latency(us)
00:19:56.169 [2024-11-04T15:31:22.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:56.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme1n1 : 0.94 272.86 17.05 0.00 0.00 232062.29 16227.96 212711.13
00:19:56.169 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme2n1 : 0.95 270.51 16.91 0.00 0.00 230065.25 30957.96 201726.05
00:19:56.169 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme3n1 : 0.92 282.57 17.66 0.00 0.00 215418.03 5430.13 211712.49
00:19:56.169 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme4n1 : 0.96 334.02 20.88 0.00 0.00 179675.04 11234.74 217704.35
00:19:56.169 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme5n1 : 0.95 269.22 16.83 0.00 0.00 219766.74 18225.25 218702.99
00:19:56.169 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme6n1 : 0.93 273.83 17.11 0.00 0.00 211984.09 16976.94 211712.49
00:19:56.169 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme7n1 : 0.93 275.83 17.24 0.00 0.00 206425.97 17101.78 214708.42
00:19:56.169 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme8n1 : 0.94 271.77 16.99 0.00 0.00 206052.45 16227.96 213709.78
00:19:56.169 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme9n1 : 0.95 278.23 17.39 0.00 0.00 196939.68 6147.90 217704.35
00:19:56.169 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:56.169 Verification LBA range: start 0x0 length 0x400
00:19:56.169 Nvme10n1 : 0.96 267.93 16.75 0.00 0.00 201716.30 15978.30 233682.65
00:19:56.169 [2024-11-04T15:31:22.993Z] ===================================================================================================================
00:19:56.169 [2024-11-04T15:31:22.993Z] Total : 2796.78 174.80 0.00 0.00 209237.93 5430.13 233682.65
00:19:56.169 [2024-11-04T15:31:22.993Z] ===================================================================================================================
00:19:56.427 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2869708 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- #
stoptarget 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.361 rmmod nvme_tcp 00:19:57.361 rmmod nvme_fabrics 00:19:57.361 rmmod nvme_keyring 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2869708 ']' 
00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2869708 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2869708 ']' 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2869708 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.361 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2869708 00:19:57.654 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.654 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.654 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2869708' 00:19:57.654 killing process with pid 2869708 00:19:57.654 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2869708 00:19:57.654 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2869708 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.913 16:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.913 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.447 00:20:00.447 real 0m7.335s 00:20:00.447 user 0m21.456s 00:20:00.447 sys 0m1.359s 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.447 ************************************ 00:20:00.447 END TEST nvmf_shutdown_tc2 00:20:00.447 ************************************ 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:00.447 16:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:00.447 ************************************ 00:20:00.447 START TEST nvmf_shutdown_tc3 00:20:00.447 ************************************ 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.447 16:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.447 16:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.447 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.448 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.448 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.448 
16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.448 16:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.448 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:00.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:20:00.448 00:20:00.448 --- 10.0.0.2 ping statistics --- 00:20:00.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.448 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:00.448 00:20:00.448 --- 10.0.0.1 ping statistics --- 00:20:00.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.448 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2871176 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2871176 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2871176 ']' 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.448 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.448 [2024-11-04 16:31:27.124024] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:20:00.448 [2024-11-04 16:31:27.124071] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.448 [2024-11-04 16:31:27.191199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.448 [2024-11-04 16:31:27.233247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.448 [2024-11-04 16:31:27.233284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.448 [2024-11-04 16:31:27.233291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.448 [2024-11-04 16:31:27.233297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.448 [2024-11-04 16:31:27.233302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.448 [2024-11-04 16:31:27.234941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.448 [2024-11-04 16:31:27.235030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.448 [2024-11-04 16:31:27.235160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.448 [2024-11-04 16:31:27.235161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.707 [2024-11-04 16:31:27.371563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.707 16:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.707 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.707 Malloc1 00:20:00.707 [2024-11-04 16:31:27.482810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.707 Malloc2 00:20:00.965 Malloc3 00:20:00.965 Malloc4 00:20:00.965 Malloc5 00:20:00.965 Malloc6 00:20:00.966 Malloc7 00:20:00.966 Malloc8 00:20:01.224 Malloc9 
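The trace above shows shutdown.sh looping over `num_subsystems=({1..10})` and `cat`-ing per-index RPC fragments before a single `rpc_cmd` call. A hedged sketch of that batching pattern, under assumptions: the real script writes into `rpcs.txt` and replays the batch through `rpc_cmd` once; the bdev size, block size, and serial values below are illustrative, not taken from this log.

```shell
# Sketch of the create_subsystems loop traced above (illustrative RPC arguments).
num_subsystems=({1..10})            # same array as target/shutdown.sh@23
rpcs=$(mktemp)                      # stand-in for test/nvmf/target/rpcs.txt
for i in "${num_subsystems[@]}"; do
  # Two RPCs per subsystem: a malloc bdev and an NVMe-oF subsystem exposing it.
  cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
EOF
done
rpc_lines=$(wc -l <"$rpcs")         # two RPC lines per subsystem -> 20
```

Batching every RPC into one file and replaying it once keeps the per-call socket overhead out of the timed `create_subsystems` section, which is why the trace shows ten `cat` invocations but only one `rpc_cmd`.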
00:20:01.224 Malloc10 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2871301 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2871301 /var/tmp/bdevperf.sock 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2871301 ']' 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.224 { 00:20:01.224 "params": { 00:20:01.224 "name": "Nvme$subsystem", 00:20:01.224 "trtype": "$TEST_TRANSPORT", 00:20:01.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.224 "adrfam": "ipv4", 00:20:01.224 "trsvcid": "$NVMF_PORT", 00:20:01.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.224 "hdgst": ${hdgst:-false}, 00:20:01.224 "ddgst": ${ddgst:-false} 00:20:01.224 }, 00:20:01.224 "method": "bdev_nvme_attach_controller" 00:20:01.224 } 00:20:01.224 EOF 00:20:01.224 )") 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.224 { 00:20:01.224 "params": { 00:20:01.224 "name": "Nvme$subsystem", 00:20:01.224 "trtype": "$TEST_TRANSPORT", 00:20:01.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.224 
"adrfam": "ipv4", 00:20:01.224 "trsvcid": "$NVMF_PORT", 00:20:01.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.224 "hdgst": ${hdgst:-false}, 00:20:01.224 "ddgst": ${ddgst:-false} 00:20:01.224 }, 00:20:01.224 "method": "bdev_nvme_attach_controller" 00:20:01.224 } 00:20:01.224 EOF 00:20:01.224 )") 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.224 { 00:20:01.224 "params": { 00:20:01.224 "name": "Nvme$subsystem", 00:20:01.224 "trtype": "$TEST_TRANSPORT", 00:20:01.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.224 "adrfam": "ipv4", 00:20:01.224 "trsvcid": "$NVMF_PORT", 00:20:01.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.224 "hdgst": ${hdgst:-false}, 00:20:01.224 "ddgst": ${ddgst:-false} 00:20:01.224 }, 00:20:01.224 "method": "bdev_nvme_attach_controller" 00:20:01.224 } 00:20:01.224 EOF 00:20:01.224 )") 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.224 { 00:20:01.224 "params": { 00:20:01.224 "name": "Nvme$subsystem", 00:20:01.224 "trtype": "$TEST_TRANSPORT", 00:20:01.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.224 "adrfam": "ipv4", 00:20:01.224 "trsvcid": "$NVMF_PORT", 00:20:01.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:01.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.224 "hdgst": ${hdgst:-false}, 00:20:01.224 "ddgst": ${ddgst:-false} 00:20:01.224 }, 00:20:01.224 "method": "bdev_nvme_attach_controller" 00:20:01.224 } 00:20:01.224 EOF 00:20:01.224 )") 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.224 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.224 { 00:20:01.224 "params": { 00:20:01.224 "name": "Nvme$subsystem", 00:20:01.224 "trtype": "$TEST_TRANSPORT", 00:20:01.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.224 "adrfam": "ipv4", 00:20:01.224 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": ${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.225 { 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme$subsystem", 00:20:01.225 "trtype": "$TEST_TRANSPORT", 00:20:01.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": 
${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 [2024-11-04 16:31:27.949623] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.225 [2024-11-04 16:31:27.949672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871301 ] 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.225 { 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme$subsystem", 00:20:01.225 "trtype": "$TEST_TRANSPORT", 00:20:01.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": ${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.225 { 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme$subsystem", 00:20:01.225 "trtype": "$TEST_TRANSPORT", 00:20:01.225 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": ${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.225 { 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme$subsystem", 00:20:01.225 "trtype": "$TEST_TRANSPORT", 00:20:01.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": ${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.225 { 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme$subsystem", 00:20:01.225 "trtype": "$TEST_TRANSPORT", 00:20:01.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "$NVMF_PORT", 00:20:01.225 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.225 "hdgst": ${hdgst:-false}, 00:20:01.225 "ddgst": ${ddgst:-false} 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 } 00:20:01.225 EOF 00:20:01.225 )") 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:01.225 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme1", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme2", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme3", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 
"method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme4", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme5", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme6", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme7", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme8", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:01.225 "hdgst": false, 00:20:01.225 "ddgst": false 00:20:01.225 }, 00:20:01.225 "method": "bdev_nvme_attach_controller" 00:20:01.225 },{ 00:20:01.225 "params": { 00:20:01.225 "name": "Nvme9", 00:20:01.225 "trtype": "tcp", 00:20:01.225 "traddr": "10.0.0.2", 00:20:01.225 "adrfam": "ipv4", 00:20:01.225 "trsvcid": "4420", 00:20:01.225 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:01.225 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:01.225 "hdgst": false, 00:20:01.226 "ddgst": false 00:20:01.226 }, 00:20:01.226 "method": "bdev_nvme_attach_controller" 00:20:01.226 },{ 00:20:01.226 "params": { 00:20:01.226 "name": "Nvme10", 00:20:01.226 "trtype": "tcp", 00:20:01.226 "traddr": "10.0.0.2", 00:20:01.226 "adrfam": "ipv4", 00:20:01.226 "trsvcid": "4420", 00:20:01.226 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:01.226 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:01.226 "hdgst": false, 00:20:01.226 "ddgst": false 00:20:01.226 }, 00:20:01.226 "method": "bdev_nvme_attach_controller" 00:20:01.226 }' 00:20:01.226 [2024-11-04 16:31:28.014280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.483 [2024-11-04 16:31:28.055620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.853 Running I/O for 10 seconds... 
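The `gen_nvmf_target_json` trace above accumulates one heredoc JSON fragment per subsystem into a bash array, then joins the fragments with `IFS=,` and validates the result with `jq`. A minimal sketch of that pattern, reduced to two subsystems with fixed values; assumptions: `jq` is installed, and the real helper substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` from the environment rather than hard-coding them.

```shell
# Sketch of the config+=("$(cat <<-EOF ... )") pattern from nvmf/common.sh@582.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas (IFS=,) as the trace shows, then sanity-check
# the assembled document with jq before feeding it to bdevperf --json.
joined=$(IFS=,; printf '%s' "${config[*]}")
n_controllers=$(printf '[%s]' "$joined" | jq length)
```

Joining with `"${config[*]}"` under a subshell-local `IFS=,` is what turns the independent heredoc objects into the comma-separated controller list that `printf '%s\n'` emits in the trace.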
00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:03.111 16:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=17 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 17 -ge 100 ']' 00:20:03.111 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=84 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 84 -ge 100 ']' 00:20:03.369 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.627 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:03.899 16:31:30 
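The `waitforio` trace above polls `bdev_get_iostat` for `Nvme1n1`, sleeping 0.25 s between samples until `num_read_ops` crosses 100 or ten attempts are exhausted (the run shows samples 17, 84, then 195). A hedged sketch of that poll-until-threshold loop; assumption: the hard-coded samples below stand in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` query.

```shell
# Sketch of the waitforio loop (target/shutdown.sh@58-68), with the three
# num_read_ops values observed in this run standing in for live iostat samples.
ret=1
i=10                        # at most 10 polls before giving up
for read_io_count in 17 84 195; do
  if [ "$read_io_count" -ge 100 ]; then
    ret=0                   # enough reads observed: the target is serving I/O
    break
  fi
  sleep 0.25                # back off before the next iostat sample
  i=$((i - 1))
  [ "$i" -eq 0 ] && break   # retry budget exhausted
done
```

With the samples above the loop succeeds on the third poll, matching the trace: two `-ge 100` failures (17, 84), then `ret=0` and `break` at 195.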
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2871176 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2871176 ']' 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2871176 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2871176 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2871176' 00:20:03.899 killing process with pid 2871176 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2871176 00:20:03.899 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2871176 00:20:03.899 [2024-11-04 16:31:30.548089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88070 is same with the state(6) to be set 00:20:03.899 [2024-11-04 16:31:30.548144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88070 is same with the state(6) to be set 00:20:03.899 [2024-11-04 16:31:30.548152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a88070 is same with the state(6) to be set 00:20:03.899 [2024-11-04 16:31:30.548159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88070 is same with the state(6) to be set 00:20:03.899 [... same tcp.c:1773 *ERROR* message repeated for tqpair=0x1a88070 at timestamps 16:31:30.548159 through 16:31:30.548538 ...] 00:20:03.900 [2024-11-04 16:31:30.549597]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfac60 is same with the state(6) to be set 00:20:03.901 [2024-11-04 16:31:30.550972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88560 is same with the state(6) to be set 00:20:03.901 [2024-11-04 16:31:30.551036] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88560 is same with the state(6) to be set 00:20:03.901 [2024-11-04 16:31:30.551331] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88560 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.552144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 
[2024-11-04 16:31:30.552212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cd50 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.552286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b7d1b0 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.552375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.902 [2024-11-04 16:31:30.552428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.902 [2024-11-04 16:31:30.552434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05b0 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.553421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.553445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.902 [2024-11-04 16:31:30.553453] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553767] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.553839] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88a30 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.554998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555011] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555088] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555169] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555244] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.903 [2024-11-04 16:31:30.555269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555318] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88f20 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.555579] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.904 [2024-11-04 16:31:30.555649] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.904 [2024-11-04 16:31:30.555685] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.904 [2024-11-04 16:31:30.556642] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.904 [2024-11-04 16:31:30.556761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 
16:31:30.556816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-04 16:31:30.556966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.556989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.556993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.556997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.557004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.557011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 [2024-11-04 16:31:30.557019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1[2024-11-04 16:31:30.557026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-04 16:31:30.557036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.904 the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.904 [2024-11-04 16:31:30.557046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.904 [2024-11-04 16:31:30.557051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1[2024-11-04 16:31:30.557082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-04 16:31:30.557091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.905 [2024-11-04 16:31:30.557107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same 
with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557294] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.905 [2024-11-04 16:31:30.557372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.905 [2024-11-04 16:31:30.557383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.905 [2024-11-04 16:31:30.557388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.557391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.557400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.557408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89770 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.557417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 
[2024-11-04 16:31:30.557541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.557796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:03.906 [2024-11-04 16:31:30.557802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.906 [2024-11-04 16:31:30.558406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 
00:20:03.906 [2024-11-04 16:31:30.558482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.906 [2024-11-04 16:31:30.558542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 
16:31:30.558559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558635] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558710] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558780] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.558800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a89c40 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:03.907 [2024-11-04 16:31:30.559227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fabb00 (9): Bad file descriptor 00:20:03.907 [2024-11-04 16:31:30.559531] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.907 [2024-11-04 16:31:30.559824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with 
the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 
00:20:03.907 [2024-11-04 16:31:30.559949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.559997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.560061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.907 [2024-11-04 16:31:30.560082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fabb00 with addr=10.0.0.2, port=4420 00:20:03.907 [2024-11-04 16:31:30.560091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fabb00 is same with the state(6) to be set 00:20:03.907 [2024-11-04 16:31:30.560166] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.907 [2024-11-04 16:31:30.561400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fabb00 (9): Bad file descriptor 00:20:03.907 [2024-11-04 16:31:30.561565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:03.907 [2024-11-04 16:31:30.561581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:03.907 [2024-11-04 16:31:30.561590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:03.907 [2024-11-04 16:31:30.561599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:03.907 [2024-11-04 16:31:30.561658] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:03.907 [2024-11-04 16:31:30.562151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7cd50 (9): Bad file descriptor 00:20:03.907 [2024-11-04 16:31:30.562187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.907 [2024-11-04 16:31:30.562198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.907 [2024-11-04 16:31:30.562205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7ac70 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.562267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae3a0 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.562358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a91610 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.562432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d1b0 (9): Bad file 
descriptor 00:20:03.908 [2024-11-04 16:31:30.562457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8e30 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.562523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05b0 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.562548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562556] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.908 [2024-11-04 16:31:30.562597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.562610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6200 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.569589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:03.908 [2024-11-04 16:31:30.569816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.908 [2024-11-04 16:31:30.569831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fabb00 with addr=10.0.0.2, port=4420 00:20:03.908 [2024-11-04 16:31:30.569839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fabb00 is same with the state(6) to be set 00:20:03.908 [2024-11-04 16:31:30.569873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1fabb00 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.569905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:03.908 [2024-11-04 16:31:30.569912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:03.908 [2024-11-04 16:31:30.569920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:03.908 [2024-11-04 16:31:30.569926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:03.908 [2024-11-04 16:31:30.572184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7ac70 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.572200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae3a0 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.572223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a91610 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.572242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa8e30 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.572261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6200 (9): Bad file descriptor 00:20:03.908 [2024-11-04 16:31:30.572362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.908 [2024-11-04 16:31:30.572448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.908 [2024-11-04 16:31:30.572461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572730] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 
16:31:30.572972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.572987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.572993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.573001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.573007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.573017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.573024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.909 [2024-11-04 16:31:30.573031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.909 [2024-11-04 16:31:30.573038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:03.910 [2024-11-04 16:31:30.573220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.573292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.573298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:03.910 [2024-11-04 16:31:30.573306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d814e0 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set
00:20:03.910 [2024-11-04 16:31:30.573561]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.910 [2024-11-04 16:31:30.573567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a130 is same with the state(6) to be set 00:20:03.910 [2024-11-04 16:31:30.574266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.574280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.574291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.574306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.574313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.574321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.910 [2024-11-04 16:31:30.574328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.910 [2024-11-04 16:31:30.574336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.911 [2024-11-04 16:31:30.574430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.574845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.574851] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.580906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.580917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.580926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.580933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.580941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.580947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.580955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.580962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.911 [2024-11-04 16:31:30.580970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.911 [2024-11-04 16:31:30.580976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.580984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.580990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.580999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 
16:31:30.581071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.581268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.581276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d826c0 is same with the state(6) to be set 00:20:03.912 [2024-11-04 16:31:30.582260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.912 [2024-11-04 16:31:30.582305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.912 [2024-11-04 16:31:30.582463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.912 [2024-11-04 16:31:30.582469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
[2024-11-04 16:31:30.582478 - 16:31:30.584140] nvme_qpair.c: repeated per-command notices elided: 243:nvme_io_qpair_print_command reported READ commands (sqid:1, cid:19-63, nsid:1, lba:27008-32640, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) interleaved with WRITE commands (sqid:1, cid:0-4, nsid:1, lba:32768-33280, len:128), each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ecc450 is same with the state(6) to be set; then a second batch of READ commands (sqid:1, cid:0-61, nsid:1, lba:16384-24192, len:128) aborted with the same SQ DELETION (00/08) completion status.
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.915 [2024-11-04 16:31:30.584242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.584250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209b430 is same with the state(6) to be set 00:20:03.915 [2024-11-04 16:31:30.585436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:03.915 [2024-11-04 16:31:30.585456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:03.915 [2024-11-04 16:31:30.585474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:03.915 [2024-11-04 16:31:30.585567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.915 [2024-11-04 16:31:30.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.585591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.915 [2024-11-04 16:31:30.585607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.585617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.915 [2024-11-04 16:31:30.585626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.915 [2024-11-04 16:31:30.585635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.915 [2024-11-04 16:31:30.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.585652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8400 is same with the state(6) to be set 00:20:03.916 [2024-11-04 16:31:30.585688] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:03.916 [2024-11-04 16:31:30.586968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:03.916 [2024-11-04 16:31:30.587182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.916 [2024-11-04 16:31:30.587201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7d1b0 with addr=10.0.0.2, port=4420 00:20:03.916 [2024-11-04 16:31:30.587218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d1b0 is same with the state(6) to be set 00:20:03.916 [2024-11-04 16:31:30.587390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.916 [2024-11-04 16:31:30.587405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cd50 with addr=10.0.0.2, port=4420 00:20:03.916 [2024-11-04 16:31:30.587414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cd50 is same with the state(6) to be set 00:20:03.916 
[2024-11-04 16:31:30.587621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.916 [2024-11-04 16:31:30.587636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05b0 with addr=10.0.0.2, port=4420 00:20:03.916 [2024-11-04 16:31:30.587645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05b0 is same with the state(6) to be set 00:20:03.916 [2024-11-04 16:31:30.588262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 
16:31:30.588486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:03.916 [2024-11-04 16:31:30.588847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588960] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.916 [2024-11-04 16:31:30.588980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-11-04 16:31:30.588991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 
16:31:30.589313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.589608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.589619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1e00 is same with the state(6) to be set 00:20:03.917 [2024-11-04 16:31:30.590930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.590948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.590964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.590974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 
16:31:30.590987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.590995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.591007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.591016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.917 [2024-11-04 16:31:30.591027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.917 [2024-11-04 16:31:30.591036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:03.918 [2024-11-04 16:31:30.591342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 
16:31:30.591825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.918 [2024-11-04 16:31:30.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.918 [2024-11-04 16:31:30.591874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.591895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.591922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.591943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.591963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.591984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.591995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 
[2024-11-04 16:31:30.592189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.592292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.592301] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3210 is same with the state(6) to be set 00:20:03.919 [2024-11-04 16:31:30.593615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:03.919 [2024-11-04 16:31:30.593858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593973] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.593984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.593993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.594004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.594014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.919 [2024-11-04 16:31:30.594026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.919 [2024-11-04 16:31:30.594037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 16:31:30.594308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.920 [2024-11-04 16:31:30.594319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.920 [2024-11-04 
16:31:30.594328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:03.920 [2024-11-04 16:31:30.594339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:03.920 [2024-11-04 16:31:30.594347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / "ABORTED - SQ DELETION (00/08)" completion pair repeats for cid:35 through cid:63 (sqid:1, nsid:1, lba:29056 through lba:32640, len:128) ...]
00:20:03.921 [2024-11-04 16:31:30.594885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f825d0 is same with the state(6) to be set
00:20:03.921 [2024-11-04 16:31:30.595866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:03.921 [2024-11-04 16:31:30.595878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:1 through cid:63 (sqid:1, nsid:1, lba:24704 through lba:32640, len:128) ...]
00:20:03.922 [2024-11-04 16:31:30.596881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f83a50 is same with the state(6) to be set
00:20:03.922 [2024-11-04 16:31:30.598056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:03.922 [2024-11-04 16:31:30.598074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:03.922 [2024-11-04 16:31:30.598084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:03.922 [2024-11-04 16:31:30.598092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:03.922 task offset: 26624 on job bdev=Nvme5n1 fails
00:20:03.922
00:20:03.922 Latency(us)
00:20:03.922 [2024-11-04T15:31:30.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:03.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.922 Job: Nvme1n1 ended in about 0.94 seconds with error
00:20:03.922 Verification LBA range: start 0x0 length 0x400
00:20:03.922 Nvme1n1 : 0.94 203.22 12.70 67.74 0.00 233825.28 17975.59 240673.16
00:20:03.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.922 Job: Nvme2n1 ended in about 0.95 seconds with error
00:20:03.922 Verification LBA range: start 0x0 length 0x400
00:20:03.922 Nvme2n1 : 0.95 134.35 8.40 67.17 0.00 309253.20 18849.40 271631.12
00:20:03.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.922 Job: Nvme3n1 ended in about 0.96 seconds with error
00:20:03.922 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme3n1 : 0.96 210.16 13.14 66.59 0.00 221461.47 18724.57 245666.38
00:20:03.923 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme4n1 ended in about 0.96 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme4n1 : 0.96 203.35 12.71 66.40 0.00 223358.45 29959.31 231685.36
00:20:03.923 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme5n1 ended in about 0.93 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme5n1 : 0.93 206.53 12.91 68.84 0.00 214393.42 1950.48 249660.95
00:20:03.923 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme6n1 ended in about 0.97 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme6n1 : 0.97 198.69 12.42 66.23 0.00 219713.34 17226.61 229688.08
00:20:03.923 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme7n1 ended in about 0.97 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme7n1 : 0.97 198.28 12.39 66.09 0.00 216315.73 14917.24 245666.38
00:20:03.923 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme8n1 ended in about 0.96 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme8n1 : 0.96 205.79 12.86 66.86 0.00 205646.61 10860.25 251658.24
00:20:03.923 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme9n1 : 0.93 206.00 12.88 0.00 0.00 266099.89 23093.64 245666.38
00:20:03.923 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:03.923 Job: Nvme10n1 ended in about 0.96 seconds with error
00:20:03.923 Verification LBA range: start 0x0 length 0x400
00:20:03.923 Nvme10n1 : 0.96 133.93 8.37 66.96 0.00 269008.05 20846.69 269633.83
00:20:03.923 [2024-11-04T15:31:30.747Z] ===================================================================================================================
00:20:03.923 [2024-11-04T15:31:30.747Z] Total : 1900.30 118.77 602.89 0.00 234244.02 1950.48 271631.12
00:20:03.923 [2024-11-04 16:31:30.626709] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:03.923 [2024-11-04 16:31:30.627077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:03.923 [2024-11-04 16:31:30.627101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock
connection error of tqpair=0x1ff6200 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.627114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6200 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.627130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d1b0 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.627142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7cd50 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.627153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05b0 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.627189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd8400 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.627213] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:03.923 [2024-11-04 16:31:30.627226] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:03.923 [2024-11-04 16:31:30.627237] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:03.923 [2024-11-04 16:31:30.627247] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:20:03.923 [2024-11-04 16:31:30.627256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6200 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.627795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:03.923 [2024-11-04 16:31:30.628016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.923 [2024-11-04 16:31:30.628034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fabb00 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.628044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fabb00 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.628268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.923 [2024-11-04 16:31:30.628280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7ac70 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.628289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7ac70 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.628488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.923 [2024-11-04 16:31:30.628501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa8e30 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.628509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa8e30 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.628662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.923 [2024-11-04 16:31:30.628674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae3a0 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.628681] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae3a0 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.628691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.628698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.628708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.628717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:03.923 [2024-11-04 16:31:30.628726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.628731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.628738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.628745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:03.923 [2024-11-04 16:31:30.628751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.628757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.628766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.628772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:03.923 [2024-11-04 16:31:30.628814] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:03.923 [2024-11-04 16:31:30.629874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.923 [2024-11-04 16:31:30.629891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a91610 with addr=10.0.0.2, port=4420 00:20:03.923 [2024-11-04 16:31:30.629900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a91610 is same with the state(6) to be set 00:20:03.923 [2024-11-04 16:31:30.629914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fabb00 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.629924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7ac70 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.629933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa8e30 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.629943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae3a0 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.629952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.629959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.629965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.629972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:03.923 [2024-11-04 16:31:30.630035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:03.923 [2024-11-04 16:31:30.630047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:03.923 [2024-11-04 16:31:30.630057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:03.923 [2024-11-04 16:31:30.630066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:03.923 [2024-11-04 16:31:30.630097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a91610 (9): Bad file descriptor 00:20:03.923 [2024-11-04 16:31:30.630106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.630113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.630121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.630128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:03.923 [2024-11-04 16:31:30.630136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.630143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.630149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.630155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:20:03.923 [2024-11-04 16:31:30.630162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.630170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.630177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:03.923 [2024-11-04 16:31:30.630183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:03.923 [2024-11-04 16:31:30.630189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:03.923 [2024-11-04 16:31:30.630196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:03.923 [2024-11-04 16:31:30.630203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.630210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:20:03.924 [2024-11-04 16:31:30.630508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.924 [2024-11-04 16:31:30.630521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd8400 with addr=10.0.0.2, port=4420 00:20:03.924 [2024-11-04 16:31:30.630529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8400 is same with the state(6) to be set 00:20:03.924 [2024-11-04 16:31:30.630678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.924 [2024-11-04 16:31:30.630688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe05b0 with addr=10.0.0.2, port=4420 00:20:03.924 [2024-11-04 16:31:30.630695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe05b0 is same with the state(6) to be set 00:20:03.924 [2024-11-04 16:31:30.630892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.924 [2024-11-04 16:31:30.630904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7cd50 with addr=10.0.0.2, port=4420 00:20:03.924 [2024-11-04 16:31:30.630911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7cd50 is same with the state(6) to be set 00:20:03.924 [2024-11-04 16:31:30.631137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.924 [2024-11-04 16:31:30.631147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7d1b0 with addr=10.0.0.2, port=4420 00:20:03.924 [2024-11-04 16:31:30.631154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d1b0 is same with the state(6) to be set 00:20:03.924 [2024-11-04 16:31:30.631161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:03.924 [2024-11-04 16:31:30.631166] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:03.924 [2024-11-04 16:31:30.631173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.631180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:03.924 [2024-11-04 16:31:30.631209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd8400 (9): Bad file descriptor 00:20:03.924 [2024-11-04 16:31:30.631219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe05b0 (9): Bad file descriptor 00:20:03.924 [2024-11-04 16:31:30.631228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7cd50 (9): Bad file descriptor 00:20:03.924 [2024-11-04 16:31:30.631236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d1b0 (9): Bad file descriptor 00:20:03.924 [2024-11-04 16:31:30.631258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:03.924 [2024-11-04 16:31:30.631265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:03.924 [2024-11-04 16:31:30.631276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.631283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:03.924 [2024-11-04 16:31:30.631289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:03.924 [2024-11-04 16:31:30.631295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:03.924 [2024-11-04 16:31:30.631301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.631306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:03.924 [2024-11-04 16:31:30.631314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:03.924 [2024-11-04 16:31:30.631320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:03.924 [2024-11-04 16:31:30.631328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.631334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:03.924 [2024-11-04 16:31:30.631342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:03.924 [2024-11-04 16:31:30.631349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:03.924 [2024-11-04 16:31:30.631357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:03.924 [2024-11-04 16:31:30.631363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:04.183 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2871301 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2871301 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2871301 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.561 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.561 rmmod nvme_tcp 00:20:05.561 rmmod nvme_fabrics 00:20:05.561 rmmod nvme_keyring 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:05.561 16:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2871176 ']' 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2871176 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2871176 ']' 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2871176 00:20:05.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2871176) - No such process 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2871176 is not found' 00:20:05.561 Process with pid 2871176 is not found 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.561 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.467 00:20:07.467 real 0m7.375s 00:20:07.467 user 0m17.549s 00:20:07.467 sys 0m1.348s 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.467 ************************************ 00:20:07.467 END TEST nvmf_shutdown_tc3 00:20:07.467 ************************************ 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:07.467 ************************************ 00:20:07.467 START TEST nvmf_shutdown_tc4 00:20:07.467 ************************************ 00:20:07.467 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:07.467 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:07.467 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.468 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.468 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:20:07.468 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.468 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.468 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.468 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:07.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:20:07.728 00:20:07.728 --- 10.0.0.2 ping statistics --- 00:20:07.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.728 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:20:07.728 00:20:07.728 --- 10.0.0.1 ping statistics --- 00:20:07.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.728 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.728 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2872564 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2872564 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2872564 ']' 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.728 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:07.987 [2024-11-04 16:31:34.561120] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:20:07.988 [2024-11-04 16:31:34.561163] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.988 [2024-11-04 16:31:34.627906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.988 [2024-11-04 16:31:34.667611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.988 [2024-11-04 16:31:34.667650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.988 [2024-11-04 16:31:34.667657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.988 [2024-11-04 16:31:34.667662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.988 [2024-11-04 16:31:34.667667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.988 [2024-11-04 16:31:34.669150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.988 [2024-11-04 16:31:34.669218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.988 [2024-11-04 16:31:34.669307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.988 [2024-11-04 16:31:34.669308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.988 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:08.247 [2024-11-04 16:31:34.817352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.247 16:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.247 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.248 16:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:08.248 Malloc1 00:20:08.248 [2024-11-04 16:31:34.925921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.248 Malloc2 00:20:08.248 Malloc3 00:20:08.248 Malloc4 00:20:08.506 Malloc5 00:20:08.506 Malloc6 00:20:08.506 Malloc7 00:20:08.506 Malloc8 00:20:08.506 Malloc9 
00:20:08.506 Malloc10 00:20:08.506 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.506 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:08.506 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.506 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:08.764 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2872646 00:20:08.764 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:08.764 16:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:08.764 [2024-11-04 16:31:35.420130] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2872564 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2872564 ']' 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2872564 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2872564 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.041 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2872564' 00:20:14.041 killing process with pid 2872564 00:20:14.042 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2872564 00:20:14.042 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2872564 00:20:14.042 [2024-11-04 16:31:40.421763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 
16:31:40.421821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.421829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.421835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.421842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.421849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.421861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833e50 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422250] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834320 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347f0 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347f0 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347f0 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347f0 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.422982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18347f0 is same with the state(6) to be set 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error 
(sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 [2024-11-04 16:31:40.433151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error 
(sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 
00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 [2024-11-04 16:31:40.434070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 [2024-11-04 16:31:40.434297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 [2024-11-04 16:31:40.434321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 starting I/O failed: -6 00:20:14.042 [2024-11-04 16:31:40.434329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.434336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 
Write completed with error (sct=0, sc=8) 00:20:14.042 [2024-11-04 16:31:40.434342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 [2024-11-04 16:31:40.434348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838060 is same with the state(6) to be set 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.042 Write completed with error (sct=0, sc=8) 00:20:14.042 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, 
sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 [2024-11-04 16:31:40.435079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write 
completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 
Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 
00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 Write completed with error (sct=0, sc=8) 00:20:14.043 starting I/O failed: -6 00:20:14.043 [2024-11-04 16:31:40.436653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:14.043 NVMe io qpair process completion error 00:20:14.043 [2024-11-04 16:31:40.437090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3ca0 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3ca0 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437122] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3ca0 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3ca0 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3ca0 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.043 [2024-11-04 16:31:40.437744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.437750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4170 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.439406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6890 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 [2024-11-04 16:31:40.439427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6890 is same with the state(6) to be set 00:20:14.044 starting I/O failed: -6 00:20:14.044 [2024-11-04 16:31:40.439438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6890 is same with the 
state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.439445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6890 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 [2024-11-04 16:31:40.439451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6890 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 [2024-11-04 16:31:40.439660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6d60 is same with the state(6) to be set 00:20:14.044 starting I/O failed: -6 00:20:14.044 [2024-11-04 16:31:40.439681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6d60 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 [2024-11-04 16:31:40.439688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6d60 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.439695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x15c6d60 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.439701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6d60 is same with Write completed with error (sct=0, sc=8) 00:20:14.044 the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.439709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6d60 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 [2024-11-04 16:31:40.440001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.440024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.440033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.440029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:14.044 [2024-11-04 16:31:40.440042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.440049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 [2024-11-04 16:31:40.440059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c5ef0 is same with the state(6) to be set 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 
starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with 
error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 [2024-11-04 16:31:40.440924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: -6 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 Write completed with error (sct=0, sc=8) 00:20:14.044 starting I/O failed: 
-6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with 
error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 [2024-11-04 16:31:40.441910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, 
sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error 
(sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 Write completed with error (sct=0, sc=8) 00:20:14.045 starting I/O failed: -6 00:20:14.045 [2024-11-04 
16:31:40.443427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:14.045 NVMe io qpair process completion error
00:20:14.045 Write completed with error (sct=0, sc=8)
00:20:14.045 starting I/O failed: -6
...
[2024-11-04 16:31:40.444418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:14.046 Write completed with error (sct=0, sc=8)
00:20:14.046 starting I/O failed: -6
...
[2024-11-04 16:31:40.445320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:14.046 Write completed with error (sct=0, sc=8)
00:20:14.046 starting I/O failed: -6
...
[2024-11-04 16:31:40.446335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:14.046 Write completed with error (sct=0, sc=8)
00:20:14.046 starting I/O failed: -6
...
[2024-11-04 16:31:40.448357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:14.047 NVMe io qpair process completion error
00:20:14.047 Write completed with error (sct=0, sc=8)
00:20:14.047 starting I/O failed: -6
...
[2024-11-04 16:31:40.449427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:14.048 Write completed with error (sct=0, sc=8)
00:20:14.048 starting I/O failed: -6
...
[2024-11-04 16:31:40.450343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:14.048 Write completed with error (sct=0, sc=8)
00:20:14.048 starting I/O failed: -6
...
[2024-11-04 16:31:40.451319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:14.048 Write completed with error (sct=0, sc=8)
00:20:14.048 starting I/O failed: -6
...
[2024-11-04 16:31:40.455557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:14.049 NVMe io qpair process completion error
00:20:14.049 Write completed with error (sct=0, sc=8)
00:20:14.049 starting I/O failed: -6
...
[2024-11-04 16:31:40.456562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:14.049 Write completed with error (sct=0, sc=8)
00:20:14.049 starting I/O failed: -6
...
[2024-11-04 16:31:40.457448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:14.049 Write completed with error (sct=0, sc=8)
00:20:14.049 starting I/O failed: -6
00:20:14.049
Write completed with error (sct=0, sc=8) 00:20:14.049 Write completed with error (sct=0, sc=8) 00:20:14.049 starting I/O failed: -6 00:20:14.049 Write completed with error (sct=0, sc=8) 00:20:14.049 starting I/O failed: -6 00:20:14.049 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, 
sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 [2024-11-04 16:31:40.458467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 
00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, 
sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 [2024-11-04 16:31:40.462726] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:14.050 NVMe io qpair process completion error 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 
Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 starting I/O failed: -6 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.050 Write completed with error (sct=0, sc=8) 00:20:14.051 [2024-11-04 16:31:40.463843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with 
error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 [2024-11-04 16:31:40.464673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:14.051 starting I/O failed: -6 
00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with 
error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 
starting I/O failed: -6 00:20:14.051 [2024-11-04 16:31:40.465772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error 
(sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.051 Write completed with error (sct=0, sc=8) 00:20:14.051 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with 
error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 [2024-11-04 16:31:40.467692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:20:14.052 NVMe io qpair process completion error 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 Write completed 
with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6 00:20:14.052 Write completed with error (sct=0, sc=8) 00:20:14.052 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated, 00:20:14.052 through 00:20:14.054 ...]
00:20:14.054 [2024-11-04 16:31:40.472546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines ...]
00:20:14.054 [2024-11-04 16:31:40.473486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines ...]
00:20:14.054 [2024-11-04 16:31:40.474510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines, 00:20:14.054 through 00:20:14.055 ...]
00:20:14.055 [2024-11-04 16:31:40.477944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:14.055 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines ...]
00:20:14.055 [2024-11-04 16:31:40.479049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines ...]
00:20:14.055 [2024-11-04 16:31:40.479909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines, 00:20:14.055 through 00:20:14.056 ...]
00:20:14.056 [2024-11-04 16:31:40.480983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines ...] 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056
Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 
00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 [2024-11-04 16:31:40.485048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:14.056 NVMe io qpair process completion error 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 Write completed with error (sct=0, sc=8) 00:20:14.056 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error 
(sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error 
(sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write 
completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 
00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 [2024-11-04 16:31:40.487372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.057 starting I/O failed: -6 00:20:14.057 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, 
sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error 
(sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with 
error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 Write completed with error (sct=0, sc=8) 00:20:14.058 starting I/O failed: -6 00:20:14.058 [2024-11-04 16:31:40.489268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:14.058 NVMe io qpair process completion error 00:20:14.058 Initializing NVMe Controllers 00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:20:14.058 Controller IO queue size 128, less than required. 00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:20:14.058 Controller IO queue size 128, less than required. 00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:20:14.058 Controller IO queue size 128, less than required. 
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:14.058 Controller IO queue size 128, less than required.
00:20:14.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:14.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:14.058 Initialization complete. Launching workers.
00:20:14.058 ========================================================
00:20:14.058 Latency(us)
00:20:14.058 Device Information : IOPS MiB/s Average min max
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2222.87 95.51 57581.23 837.53 111819.29
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2186.16 93.94 58559.12 986.73 111783.93
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2181.15 93.72 58731.62 887.98 106190.76
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2195.54 94.34 57741.93 507.32 104378.48
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2224.95 95.60 57562.21 694.15 99109.80
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2195.75 94.35 57729.69 903.70 101139.58
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2166.55 93.09 58517.51 928.65 98924.94
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2161.13 92.86 58681.23 740.72 98974.90
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2186.78 93.96 58035.96 731.35 102756.39
00:20:14.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2189.08 94.06 58018.96 738.38 107753.59
00:20:14.058 ========================================================
00:20:14.059 Total : 21909.95 941.44 58112.58 507.32 111819.29
00:20:14.059
00:20:14.059 [2024-11-04 16:31:40.492278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397ae0 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396a70 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396410 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397900 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396740 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395890 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397720 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395560 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395ef0 is same with the state(6) to be set
00:20:14.059 [2024-11-04 16:31:40.492559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395bc0 is same with the state(6) to be set
00:20:14.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:14.059 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2872646
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2872646
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2872646
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:14.995 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:15.255 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2872564 ']'
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2872564
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2872564 ']'
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2872564
00:20:15.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2872564) - No such process
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2872564 is not found'
00:20:15.255 Process with pid 2872564 is not found
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:15.255 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:17.159 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:17.159
00:20:17.159 real 0m9.768s
00:20:17.159 user 0m24.855s
00:20:17.159 sys 0m5.258s
00:20:17.159 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:17.159 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:17.159 ************************************
00:20:17.159 END TEST nvmf_shutdown_tc4
00:20:17.159 ************************************
00:20:17.418 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:20:17.418
00:20:17.418 real 0m39.715s
00:20:17.418 user 1m36.855s
00:20:17.418 sys 0m13.823s
00:20:17.418 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:17.418 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:17.418 ************************************
00:20:17.418 END TEST nvmf_shutdown
00:20:17.418 ************************************
00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:17.418 ************************************
00:20:17.418 START TEST nvmf_nsid
00:20:17.418 ************************************
00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:20:17.418 * Looking for test storage...
00:20:17.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.418 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.419 
16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.419 --rc genhtml_branch_coverage=1 00:20:17.419 --rc genhtml_function_coverage=1 00:20:17.419 --rc genhtml_legend=1 00:20:17.419 --rc geninfo_all_blocks=1 00:20:17.419 --rc 
geninfo_unexecuted_blocks=1 00:20:17.419 00:20:17.419 ' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.419 --rc genhtml_branch_coverage=1 00:20:17.419 --rc genhtml_function_coverage=1 00:20:17.419 --rc genhtml_legend=1 00:20:17.419 --rc geninfo_all_blocks=1 00:20:17.419 --rc geninfo_unexecuted_blocks=1 00:20:17.419 00:20:17.419 ' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.419 --rc genhtml_branch_coverage=1 00:20:17.419 --rc genhtml_function_coverage=1 00:20:17.419 --rc genhtml_legend=1 00:20:17.419 --rc geninfo_all_blocks=1 00:20:17.419 --rc geninfo_unexecuted_blocks=1 00:20:17.419 00:20:17.419 ' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.419 --rc genhtml_branch_coverage=1 00:20:17.419 --rc genhtml_function_coverage=1 00:20:17.419 --rc genhtml_legend=1 00:20:17.419 --rc geninfo_all_blocks=1 00:20:17.419 --rc geninfo_unexecuted_blocks=1 00:20:17.419 00:20:17.419 ' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.419 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.678 16:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.678 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.679 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.248 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.248 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:24.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:24.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:24.249 Found net devices under 0000:86:00.0: cvl_0_0 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:24.249 Found net devices under 0000:86:00.1: cvl_0_1 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:24.249 16:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.249 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:24.249 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:24.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:20:24.249 00:20:24.249 --- 10.0.0.2 ping statistics --- 00:20:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.249 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:24.249 00:20:24.249 --- 10.0.0.1 ping statistics --- 00:20:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.249 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.249 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.250 16:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2877296 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2877296 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2877296 ']' 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.250 [2024-11-04 16:31:50.257385] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:20:24.250 [2024-11-04 16:31:50.257430] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.250 [2024-11-04 16:31:50.321381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.250 [2024-11-04 16:31:50.359902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.250 [2024-11-04 16:31:50.359933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.250 [2024-11-04 16:31:50.359940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.250 [2024-11-04 16:31:50.359946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.250 [2024-11-04 16:31:50.359952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.250 [2024-11-04 16:31:50.360484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2877320 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.250 
16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=891eca30-8a3e-47f6-bca1-33417530de9f 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a79855f6-2137-447f-a993-e4b00c04bc2d 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=23d40f2b-4d3d-4d05-af9e-c3d08a19aecd 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.250 null0 00:20:24.250 null1 00:20:24.250 [2024-11-04 16:31:50.543782] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
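The three namespace UUIDs generated by `uuidgen` above are later compared against the NGUIDs that `nvme id-ns ... | jq -r .nguid` reports: `uuid2nguid` simply strips the dashes, and `nvme_get_nguid` uppercases the device value before the `[[ == ]]` check. A minimal standalone re-creation of that check, using the `ns1uuid` value and the NGUID reported in this run (the live `nvme id-ns` call is left as a comment since it needs real hardware):

```shell
# Re-creation of the uuid2nguid / nvme_get_nguid comparison from the trace.
uuid="891eca30-8a3e-47f6-bca1-33417530de9f"   # ns1uuid from this run
expected=$(tr -d '-' <<< "$uuid")             # uuid2nguid: drop the dashes
# On live hardware: nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
nguid="891eca308a3e47f6bca133417530de9f"      # value the log reports
[[ "${nguid^^}" == "${expected^^}" ]] && echo MATCH
```

The uppercase comparison mirrors `nsid.sh@43`, which echoes the NGUID in capitals before matching.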
00:20:24.250 [2024-11-04 16:31:50.543825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877320 ] 00:20:24.250 null2 00:20:24.250 [2024-11-04 16:31:50.550671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.250 [2024-11-04 16:31:50.574872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.250 [2024-11-04 16:31:50.605278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2877320 /var/tmp/tgt2.sock 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2877320 ']' 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:24.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:24.250 [2024-11-04 16:31:50.650039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:24.250 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:24.509 [2024-11-04 16:31:51.167589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.509 [2024-11-04 16:31:51.183710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:24.509 nvme0n1 nvme0n2 00:20:24.509 nvme1n1 00:20:24.509 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:24.509 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:24.509 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
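The `waitforblk` calls exercised just after the `nvme connect` above poll `lsblk` until the named block device appears. A minimal sketch of that helper, reconstructed from the trace (the real `autotest_common.sh` version does a final confirmation pass; the 15-try limit matches the `'[' 0 -lt 15 ']'` check in the log):

```shell
# Sketch of waitforblk: poll lsblk once a second until the device shows up,
# giving up after roughly 15 seconds.
waitforblk() {
    local name=$1 i=0
    while ! lsblk -l -o NAME | grep -q -w "$name"; do
        [ "$i" -lt 15 ] || return 1   # device never appeared
        i=$((i + 1))
        sleep 1
    done
    return 0
}
```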
00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:25.886 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 891eca30-8a3e-47f6-bca1-33417530de9f 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:26.822 16:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=891eca308a3e47f6bca133417530de9f 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 891ECA308A3E47F6BCA133417530DE9F 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 891ECA308A3E47F6BCA133417530DE9F == \8\9\1\E\C\A\3\0\8\A\3\E\4\7\F\6\B\C\A\1\3\3\4\1\7\5\3\0\D\E\9\F ]] 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a79855f6-2137-447f-a993-e4b00c04bc2d 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:26.822 
16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a79855f62137447fa993e4b00c04bc2d 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A79855F62137447FA993E4B00C04BC2D 00:20:26.822 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A79855F62137447FA993E4B00C04BC2D == \A\7\9\8\5\5\F\6\2\1\3\7\4\4\7\F\A\9\9\3\E\4\B\0\0\C\0\4\B\C\2\D ]] 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 23d40f2b-4d3d-4d05-af9e-c3d08a19aecd 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:26.823 16:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=23d40f2b4d3d4d05af9ec3d08a19aecd 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 23D40F2B4D3D4D05AF9EC3D08A19AECD 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 23D40F2B4D3D4D05AF9EC3D08A19AECD == \2\3\D\4\0\F\2\B\4\D\3\D\4\D\0\5\A\F\9\E\C\3\D\0\8\A\1\9\A\E\C\D ]] 00:20:26.823 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2877320 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2877320 ']' 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2877320 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2877320 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2877320' 00:20:27.081 killing process with pid 2877320 00:20:27.081 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2877320 00:20:27.081 16:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2877320 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.339 rmmod nvme_tcp 00:20:27.339 rmmod nvme_fabrics 00:20:27.339 rmmod nvme_keyring 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2877296 ']' 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2877296 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2877296 ']' 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2877296 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.339 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2877296 00:20:27.598 16:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2877296' 00:20:27.598 killing process with pid 2877296 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2877296 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2877296 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.598 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.133 16:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.133 00:20:30.133 real 0m12.359s 00:20:30.133 user 0m9.646s 00:20:30.133 sys 0m5.460s 00:20:30.133 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.133 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:30.133 ************************************ 00:20:30.133 END TEST nvmf_nsid 00:20:30.133 ************************************ 00:20:30.133 16:31:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:30.133 00:20:30.133 real 11m42.319s 00:20:30.133 user 25m16.697s 00:20:30.133 sys 3m35.510s 00:20:30.133 16:31:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.133 16:31:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.133 ************************************ 00:20:30.133 END TEST nvmf_target_extra 00:20:30.133 ************************************ 00:20:30.133 16:31:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:30.133 16:31:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:30.133 16:31:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.133 16:31:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.133 ************************************ 00:20:30.133 START TEST nvmf_host 00:20:30.133 ************************************ 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:30.133 * Looking for test storage... 
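The `killprocess` calls in the cleanup trace above first verify the pid with `kill -0`, then (on Linux) read the process name via `ps --no-headers -o comm=` and refuse to kill a `sudo` wrapper before signalling. A loose sketch of that shape, reconstructed from the trace rather than from the helper's source:

```shell
# Loose sketch of the killprocess cleanup helper seen in the trace:
# refuse to kill a sudo wrapper, otherwise signal the pid and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                   # process must still exist
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap if it was our child
}
```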
00:20:30.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.133 --rc genhtml_branch_coverage=1 00:20:30.133 --rc genhtml_function_coverage=1 00:20:30.133 --rc genhtml_legend=1 00:20:30.133 --rc geninfo_all_blocks=1 00:20:30.133 --rc geninfo_unexecuted_blocks=1 00:20:30.133 00:20:30.133 ' 00:20:30.133 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:30.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.134 --rc genhtml_branch_coverage=1 00:20:30.134 --rc genhtml_function_coverage=1 00:20:30.134 --rc genhtml_legend=1 00:20:30.134 --rc 
geninfo_all_blocks=1 00:20:30.134 --rc geninfo_unexecuted_blocks=1 00:20:30.134 00:20:30.134 ' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:30.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.134 --rc genhtml_branch_coverage=1 00:20:30.134 --rc genhtml_function_coverage=1 00:20:30.134 --rc genhtml_legend=1 00:20:30.134 --rc geninfo_all_blocks=1 00:20:30.134 --rc geninfo_unexecuted_blocks=1 00:20:30.134 00:20:30.134 ' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:30.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.134 --rc genhtml_branch_coverage=1 00:20:30.134 --rc genhtml_function_coverage=1 00:20:30.134 --rc genhtml_legend=1 00:20:30.134 --rc geninfo_all_blocks=1 00:20:30.134 --rc geninfo_unexecuted_blocks=1 00:20:30.134 00:20:30.134 ' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.134 ************************************ 00:20:30.134 START TEST nvmf_multicontroller 00:20:30.134 ************************************ 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:30.134 * Looking for test storage... 
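The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions`) walks a field-by-field version comparison: both versions are split on `.`, `-` and `:`, then compared digit group by digit group. A condensed sketch of that logic, assuming purely numeric fields (the real helper also validates each field with `[[ =~ ^[0-9]+$ ]]` via `decimal`, as the trace shows):

```shell
# Condensed cmp_versions: split on '.', '-' and ':' (matching IFS=.-: in the
# trace), then compare field by field, padding the shorter version with 0.
cmp_versions() {
    local op=$2 v d1 d2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }
```

With this, `lt 1.15 2` succeeds (1 < 2 in the first field), which is exactly the branch the trace takes before selecting the lcov coverage options.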
00:20:30.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:30.134 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:30.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.135 --rc genhtml_branch_coverage=1 00:20:30.135 --rc genhtml_function_coverage=1 
00:20:30.135 --rc genhtml_legend=1 00:20:30.135 --rc geninfo_all_blocks=1 00:20:30.135 --rc geninfo_unexecuted_blocks=1 00:20:30.135 00:20:30.135 ' 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:30.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.135 --rc genhtml_branch_coverage=1 00:20:30.135 --rc genhtml_function_coverage=1 00:20:30.135 --rc genhtml_legend=1 00:20:30.135 --rc geninfo_all_blocks=1 00:20:30.135 --rc geninfo_unexecuted_blocks=1 00:20:30.135 00:20:30.135 ' 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:30.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.135 --rc genhtml_branch_coverage=1 00:20:30.135 --rc genhtml_function_coverage=1 00:20:30.135 --rc genhtml_legend=1 00:20:30.135 --rc geninfo_all_blocks=1 00:20:30.135 --rc geninfo_unexecuted_blocks=1 00:20:30.135 00:20:30.135 ' 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:30.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.135 --rc genhtml_branch_coverage=1 00:20:30.135 --rc genhtml_function_coverage=1 00:20:30.135 --rc genhtml_legend=1 00:20:30.135 --rc geninfo_all_blocks=1 00:20:30.135 --rc geninfo_unexecuted_blocks=1 00:20:30.135 00:20:30.135 ' 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.135 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.394 16:31:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.394 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.395 16:31:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:35.674 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:35.674 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.674 16:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:35.674 Found net devices under 0000:86:00.0: cvl_0_0 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:35.674 Found net devices under 0000:86:00.1: cvl_0_1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.674 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:35.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:20:35.675 00:20:35.675 --- 10.0.0.2 ping statistics --- 00:20:35.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.675 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:20:35.675 00:20:35.675 --- 10.0.0.1 ping statistics --- 00:20:35.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.675 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2881505 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2881505 00:20:35.675 16:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2881505 ']' 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.675 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.675 [2024-11-04 16:32:02.396041] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:20:35.675 [2024-11-04 16:32:02.396090] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.675 [2024-11-04 16:32:02.462955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:35.937 [2024-11-04 16:32:02.506269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.938 [2024-11-04 16:32:02.506303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:35.938 [2024-11-04 16:32:02.506313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.938 [2024-11-04 16:32:02.506321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.938 [2024-11-04 16:32:02.506327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.938 [2024-11-04 16:32:02.507745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.938 [2024-11-04 16:32:02.507837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.938 [2024-11-04 16:32:02.507840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 [2024-11-04 16:32:02.643772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 Malloc0 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 [2024-11-04 
16:32:02.697624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 [2024-11-04 16:32:02.705557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 Malloc1 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2881645 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:35.938 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2881645 /var/tmp/bdevperf.sock 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2881645 ']' 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.201 16:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.201 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.201 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:36.201 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:36.201 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.201 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.565 NVMe0n1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.565 1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:36.565 16:32:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.565 request: 00:20:36.565 { 00:20:36.565 "name": "NVMe0", 00:20:36.565 "trtype": "tcp", 00:20:36.565 "traddr": "10.0.0.2", 00:20:36.565 "adrfam": "ipv4", 00:20:36.565 "trsvcid": "4420", 00:20:36.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.565 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:36.565 "hostaddr": "10.0.0.1", 00:20:36.565 "prchk_reftag": false, 00:20:36.565 "prchk_guard": false, 00:20:36.565 "hdgst": false, 00:20:36.565 "ddgst": false, 00:20:36.565 "allow_unrecognized_csi": false, 00:20:36.565 "method": "bdev_nvme_attach_controller", 00:20:36.565 "req_id": 1 00:20:36.565 } 00:20:36.565 Got JSON-RPC error response 00:20:36.565 response: 00:20:36.565 { 00:20:36.565 "code": -114, 00:20:36.565 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:36.565 } 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:36.565 16:32:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.565 request: 00:20:36.565 { 00:20:36.565 "name": "NVMe0", 00:20:36.565 "trtype": "tcp", 00:20:36.565 "traddr": "10.0.0.2", 00:20:36.565 "adrfam": "ipv4", 00:20:36.565 "trsvcid": "4420", 00:20:36.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.565 "hostaddr": "10.0.0.1", 00:20:36.565 "prchk_reftag": false, 00:20:36.565 "prchk_guard": false, 00:20:36.565 "hdgst": false, 00:20:36.565 "ddgst": false, 00:20:36.565 "allow_unrecognized_csi": false, 00:20:36.565 "method": "bdev_nvme_attach_controller", 00:20:36.565 "req_id": 1 00:20:36.565 } 00:20:36.565 Got JSON-RPC error response 00:20:36.565 response: 00:20:36.565 { 00:20:36.565 "code": -114, 00:20:36.565 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:36.565 } 00:20:36.565 16:32:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.565 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.566 request: 00:20:36.566 { 00:20:36.566 "name": "NVMe0", 00:20:36.566 "trtype": "tcp", 00:20:36.566 "traddr": "10.0.0.2", 00:20:36.566 "adrfam": "ipv4", 00:20:36.566 "trsvcid": "4420", 00:20:36.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.566 "hostaddr": "10.0.0.1", 00:20:36.566 "prchk_reftag": false, 00:20:36.566 "prchk_guard": false, 00:20:36.566 "hdgst": false, 00:20:36.566 "ddgst": false, 00:20:36.566 "multipath": "disable", 00:20:36.566 "allow_unrecognized_csi": false, 00:20:36.566 "method": "bdev_nvme_attach_controller", 00:20:36.566 "req_id": 1 00:20:36.566 } 00:20:36.566 Got JSON-RPC error response 00:20:36.566 response: 00:20:36.566 { 00:20:36.566 "code": -114, 00:20:36.566 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:36.566 } 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.566 request: 00:20:36.566 { 00:20:36.566 "name": "NVMe0", 00:20:36.566 "trtype": "tcp", 00:20:36.566 "traddr": "10.0.0.2", 00:20:36.566 "adrfam": "ipv4", 00:20:36.566 "trsvcid": "4420", 00:20:36.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.566 "hostaddr": "10.0.0.1", 00:20:36.566 "prchk_reftag": false, 00:20:36.566 "prchk_guard": false, 00:20:36.566 "hdgst": false, 00:20:36.566 "ddgst": false, 00:20:36.566 "multipath": "failover", 00:20:36.566 "allow_unrecognized_csi": false, 00:20:36.566 "method": "bdev_nvme_attach_controller", 00:20:36.566 "req_id": 1 00:20:36.566 } 00:20:36.566 Got JSON-RPC error response 00:20:36.566 response: 00:20:36.566 { 00:20:36.566 "code": -114, 00:20:36.566 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:36.566 } 00:20:36.566 16:32:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.566 NVMe0n1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.566 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.846 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:36.846 16:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.223 { 00:20:38.223 "results": [ 00:20:38.223 { 00:20:38.223 "job": "NVMe0n1", 00:20:38.223 "core_mask": "0x1", 00:20:38.223 "workload": "write", 00:20:38.223 "status": "finished", 00:20:38.223 "queue_depth": 128, 00:20:38.223 "io_size": 4096, 00:20:38.223 "runtime": 1.002651, 00:20:38.223 "iops": 24701.516280340817, 00:20:38.223 "mibps": 96.49029797008131, 00:20:38.223 "io_failed": 0, 00:20:38.223 "io_timeout": 0, 00:20:38.223 "avg_latency_us": 5175.030173098997, 00:20:38.223 "min_latency_us": 4431.481904761905, 00:20:38.223 "max_latency_us": 11858.895238095238 00:20:38.223 } 00:20:38.223 ], 00:20:38.223 "core_count": 1 00:20:38.223 } 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2881645 ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2881645' 00:20:38.223 killing process with pid 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2881645 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:38.223 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:38.223 [2024-11-04 16:32:02.805571] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:20:38.223 [2024-11-04 16:32:02.805632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881645 ] 00:20:38.223 [2024-11-04 16:32:02.870710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.223 [2024-11-04 16:32:02.911628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.223 [2024-11-04 16:32:03.502634] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 760ed5d4-966a-456b-83ea-8c412c269080 already exists 00:20:38.223 [2024-11-04 16:32:03.502661] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:760ed5d4-966a-456b-83ea-8c412c269080 alias for bdev NVMe1n1 00:20:38.223 [2024-11-04 16:32:03.502670] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:38.223 Running I/O for 1 seconds... 00:20:38.223 24639.00 IOPS, 96.25 MiB/s 00:20:38.223 Latency(us) 00:20:38.223 [2024-11-04T15:32:05.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.223 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:38.223 NVMe0n1 : 1.00 24701.52 96.49 0.00 0.00 5175.03 4431.48 11858.90 00:20:38.223 [2024-11-04T15:32:05.047Z] =================================================================================================================== 00:20:38.223 [2024-11-04T15:32:05.047Z] Total : 24701.52 96.49 0.00 0.00 5175.03 4431.48 11858.90 00:20:38.223 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.223 00:20:38.223 Latency(us) 00:20:38.223 [2024-11-04T15:32:05.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.223 [2024-11-04T15:32:05.047Z] =================================================================================================================== 00:20:38.223 [2024-11-04T15:32:05.047Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:20:38.223 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.223 rmmod nvme_tcp 00:20:38.223 rmmod nvme_fabrics 00:20:38.223 rmmod nvme_keyring 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2881505 ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2881505 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2881505 ']' 00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2881505 
00:20:38.223 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:38.224 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.224 16:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881505 00:20:38.224 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.224 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.224 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2881505' 00:20:38.224 killing process with pid 2881505 00:20:38.224 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2881505 00:20:38.224 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2881505 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.483 16:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.018 00:20:41.018 real 0m10.522s 00:20:41.018 user 0m11.929s 00:20:41.018 sys 0m4.745s 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:41.018 ************************************ 00:20:41.018 END TEST nvmf_multicontroller 00:20:41.018 ************************************ 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.018 ************************************ 00:20:41.018 START TEST nvmf_aer 00:20:41.018 ************************************ 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:41.018 * Looking for test storage... 
00:20:41.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.018 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc 
genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.019 --rc genhtml_branch_coverage=1 00:20:41.019 --rc genhtml_function_coverage=1 00:20:41.019 --rc genhtml_legend=1 00:20:41.019 --rc geninfo_all_blocks=1 00:20:41.019 --rc geninfo_unexecuted_blocks=1 00:20:41.019 00:20:41.019 ' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.019 16:32:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.019 16:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.290 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:46.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:46.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.291 16:32:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:46.291 Found net devices under 0000:86:00.0: cvl_0_0 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:46.291 Found net devices under 0000:86:00.1: cvl_0_1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:46.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:20:46.291 00:20:46.291 --- 10.0.0.2 ping statistics --- 00:20:46.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.291 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:20:46.291 00:20:46.291 --- 10.0.0.1 ping statistics --- 00:20:46.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.291 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2885422 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2885422 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2885422 ']' 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.291 16:32:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.291 [2024-11-04 16:32:12.981775] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:20:46.291 [2024-11-04 16:32:12.981819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.291 [2024-11-04 16:32:13.045565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.291 [2024-11-04 16:32:13.089318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:46.291 [2024-11-04 16:32:13.089356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.291 [2024-11-04 16:32:13.089363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.291 [2024-11-04 16:32:13.089369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.291 [2024-11-04 16:32:13.089374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.291 [2024-11-04 16:32:13.093616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.291 [2024-11-04 16:32:13.093637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.291 [2024-11-04 16:32:13.097634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.291 [2024-11-04 16:32:13.097636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 [2024-11-04 16:32:13.246342] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 Malloc0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 [2024-11-04 16:32:13.306036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.551 [ 00:20:46.551 { 00:20:46.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:46.551 "subtype": "Discovery", 00:20:46.551 "listen_addresses": [], 00:20:46.551 "allow_any_host": true, 00:20:46.551 "hosts": [] 00:20:46.551 }, 00:20:46.551 { 00:20:46.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.551 "subtype": "NVMe", 00:20:46.551 "listen_addresses": [ 00:20:46.551 { 00:20:46.551 "trtype": "TCP", 00:20:46.551 "adrfam": "IPv4", 00:20:46.551 "traddr": "10.0.0.2", 00:20:46.551 "trsvcid": "4420" 00:20:46.551 } 00:20:46.551 ], 00:20:46.551 "allow_any_host": true, 00:20:46.551 "hosts": [], 00:20:46.551 "serial_number": "SPDK00000000000001", 00:20:46.551 "model_number": "SPDK bdev Controller", 00:20:46.551 "max_namespaces": 2, 00:20:46.551 "min_cntlid": 1, 00:20:46.551 "max_cntlid": 65519, 00:20:46.551 "namespaces": [ 00:20:46.551 { 00:20:46.551 "nsid": 1, 00:20:46.551 "bdev_name": "Malloc0", 00:20:46.551 "name": "Malloc0", 00:20:46.551 "nguid": "A10A556F8B024FCC9EF83129FCD04976", 00:20:46.551 "uuid": "a10a556f-8b02-4fcc-9ef8-3129fcd04976" 00:20:46.551 } 00:20:46.551 ] 00:20:46.551 } 00:20:46.551 ] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2885455 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:20:46.551 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.811 Malloc1 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.811 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:46.811 [ 00:20:46.811 { 00:20:46.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:46.811 "subtype": "Discovery", 00:20:46.811 "listen_addresses": [], 00:20:46.811 "allow_any_host": true, 00:20:46.811 "hosts": [] 00:20:46.811 }, 00:20:46.811 { 00:20:46.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.811 "subtype": "NVMe", 00:20:46.811 "listen_addresses": [ 00:20:46.811 { 00:20:46.811 "trtype": "TCP", 00:20:46.811 "adrfam": "IPv4", 00:20:46.811 "traddr": "10.0.0.2", 00:20:46.811 "trsvcid": "4420" 00:20:46.811 } 00:20:46.811 ], 00:20:46.811 "allow_any_host": true, 00:20:46.811 "hosts": [], 00:20:46.811 "serial_number": "SPDK00000000000001", 00:20:46.811 "model_number": 
"SPDK bdev Controller", 00:20:46.811 "max_namespaces": 2, 00:20:46.811 "min_cntlid": 1, 00:20:46.812 "max_cntlid": 65519, 00:20:46.812 "namespaces": [ 00:20:46.812 { 00:20:46.812 "nsid": 1, 00:20:46.812 "bdev_name": "Malloc0", 00:20:46.812 "name": "Malloc0", 00:20:46.812 "nguid": "A10A556F8B024FCC9EF83129FCD04976", 00:20:46.812 "uuid": "a10a556f-8b02-4fcc-9ef8-3129fcd04976" 00:20:46.812 }, 00:20:46.812 { 00:20:46.812 "nsid": 2, 00:20:46.812 "bdev_name": "Malloc1", 00:20:46.812 "name": "Malloc1", 00:20:46.812 "nguid": "76C71FF5A09E47F8A404BB487881DA69", 00:20:46.812 Asynchronous Event Request test 00:20:46.812 Attaching to 10.0.0.2 00:20:46.812 Attached to 10.0.0.2 00:20:46.812 Registering asynchronous event callbacks... 00:20:46.812 Starting namespace attribute notice tests for all controllers... 00:20:46.812 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:46.812 aer_cb - Changed Namespace 00:20:46.812 Cleaning up... 00:20:46.812 "uuid": "76c71ff5-a09e-47f8-a404-bb487881da69" 00:20:46.812 } 00:20:46.812 ] 00:20:46.812 } 00:20:46.812 ] 00:20:46.812 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.812 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2885455 00:20:46.812 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:46.812 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.812 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:47.071 
16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.071 rmmod nvme_tcp 00:20:47.071 rmmod nvme_fabrics 00:20:47.071 rmmod nvme_keyring 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.071 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2885422 ']' 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2885422 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2885422 ']' 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 2885422 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2885422 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2885422' 00:20:47.072 killing process with pid 2885422 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2885422 00:20:47.072 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2885422 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.331 16:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.236 00:20:49.236 real 0m8.645s 00:20:49.236 user 0m5.100s 00:20:49.236 sys 0m4.321s 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:49.236 ************************************ 00:20:49.236 END TEST nvmf_aer 00:20:49.236 ************************************ 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.236 16:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.496 ************************************ 00:20:49.496 START TEST nvmf_async_init 00:20:49.496 ************************************ 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:49.496 * Looking for test storage... 
00:20:49.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.496 16:32:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.496 --rc genhtml_branch_coverage=1 00:20:49.496 --rc genhtml_function_coverage=1 00:20:49.496 --rc genhtml_legend=1 00:20:49.496 --rc geninfo_all_blocks=1 00:20:49.496 --rc geninfo_unexecuted_blocks=1 00:20:49.496 
00:20:49.496 ' 00:20:49.496 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.497 --rc genhtml_branch_coverage=1 00:20:49.497 --rc genhtml_function_coverage=1 00:20:49.497 --rc genhtml_legend=1 00:20:49.497 --rc geninfo_all_blocks=1 00:20:49.497 --rc geninfo_unexecuted_blocks=1 00:20:49.497 00:20:49.497 ' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.497 --rc genhtml_branch_coverage=1 00:20:49.497 --rc genhtml_function_coverage=1 00:20:49.497 --rc genhtml_legend=1 00:20:49.497 --rc geninfo_all_blocks=1 00:20:49.497 --rc geninfo_unexecuted_blocks=1 00:20:49.497 00:20:49.497 ' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:49.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.497 --rc genhtml_branch_coverage=1 00:20:49.497 --rc genhtml_function_coverage=1 00:20:49.497 --rc genhtml_legend=1 00:20:49.497 --rc geninfo_all_blocks=1 00:20:49.497 --rc geninfo_unexecuted_blocks=1 00:20:49.497 00:20:49.497 ' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2ec3afda089740569edf81ef1170aa70 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.497 16:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.768 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.768 16:32:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.769 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.769 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.769 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.769 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.769 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:20:55.028 00:20:55.028 --- 10.0.0.2 ping statistics --- 00:20:55.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.028 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:20:55.028 00:20:55.028 --- 10.0.0.1 ping statistics --- 00:20:55.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.028 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2888981 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2888981 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2888981 ']' 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.028 16:32:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 [2024-11-04 16:32:21.853163] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:20:55.287 [2024-11-04 16:32:21.853206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.287 [2024-11-04 16:32:21.916728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.287 [2024-11-04 16:32:21.957395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.287 [2024-11-04 16:32:21.957433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.287 [2024-11-04 16:32:21.957440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.287 [2024-11-04 16:32:21.957446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.287 [2024-11-04 16:32:21.957451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.287 [2024-11-04 16:32:21.958021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 [2024-11-04 16:32:22.087978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 null0 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.287 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2ec3afda089740569edf81ef1170aa70 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.546 [2024-11-04 16:32:22.128233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.546 nvme0n1 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.546 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.546 [ 00:20:55.546 { 00:20:55.546 "name": "nvme0n1", 00:20:55.546 "aliases": [ 00:20:55.546 "2ec3afda-0897-4056-9edf-81ef1170aa70" 00:20:55.546 ], 00:20:55.546 "product_name": "NVMe disk", 00:20:55.546 "block_size": 512, 00:20:55.546 "num_blocks": 2097152, 00:20:55.546 "uuid": "2ec3afda-0897-4056-9edf-81ef1170aa70", 00:20:55.546 "numa_id": 1, 00:20:55.546 "assigned_rate_limits": { 00:20:55.805 "rw_ios_per_sec": 0, 00:20:55.805 "rw_mbytes_per_sec": 0, 00:20:55.805 "r_mbytes_per_sec": 0, 00:20:55.805 "w_mbytes_per_sec": 0 00:20:55.805 }, 00:20:55.805 "claimed": false, 00:20:55.805 "zoned": false, 00:20:55.805 "supported_io_types": { 00:20:55.805 "read": true, 00:20:55.805 "write": true, 00:20:55.805 "unmap": false, 00:20:55.805 "flush": true, 00:20:55.805 "reset": true, 00:20:55.805 "nvme_admin": true, 00:20:55.805 "nvme_io": true, 00:20:55.805 "nvme_io_md": false, 00:20:55.805 "write_zeroes": true, 00:20:55.805 "zcopy": false, 00:20:55.805 "get_zone_info": false, 00:20:55.805 "zone_management": false, 00:20:55.805 "zone_append": false, 00:20:55.805 "compare": true, 00:20:55.805 "compare_and_write": true, 00:20:55.805 "abort": true, 00:20:55.805 "seek_hole": false, 00:20:55.805 "seek_data": false, 00:20:55.805 "copy": true, 00:20:55.805 
"nvme_iov_md": false 00:20:55.805 }, 00:20:55.805 "memory_domains": [ 00:20:55.805 { 00:20:55.805 "dma_device_id": "system", 00:20:55.805 "dma_device_type": 1 00:20:55.805 } 00:20:55.805 ], 00:20:55.805 "driver_specific": { 00:20:55.805 "nvme": [ 00:20:55.805 { 00:20:55.805 "trid": { 00:20:55.805 "trtype": "TCP", 00:20:55.805 "adrfam": "IPv4", 00:20:55.805 "traddr": "10.0.0.2", 00:20:55.805 "trsvcid": "4420", 00:20:55.805 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:55.805 }, 00:20:55.805 "ctrlr_data": { 00:20:55.805 "cntlid": 1, 00:20:55.805 "vendor_id": "0x8086", 00:20:55.805 "model_number": "SPDK bdev Controller", 00:20:55.805 "serial_number": "00000000000000000000", 00:20:55.805 "firmware_revision": "25.01", 00:20:55.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.805 "oacs": { 00:20:55.805 "security": 0, 00:20:55.805 "format": 0, 00:20:55.805 "firmware": 0, 00:20:55.805 "ns_manage": 0 00:20:55.805 }, 00:20:55.805 "multi_ctrlr": true, 00:20:55.805 "ana_reporting": false 00:20:55.805 }, 00:20:55.805 "vs": { 00:20:55.805 "nvme_version": "1.3" 00:20:55.805 }, 00:20:55.805 "ns_data": { 00:20:55.805 "id": 1, 00:20:55.805 "can_share": true 00:20:55.805 } 00:20:55.805 } 00:20:55.805 ], 00:20:55.805 "mp_policy": "active_passive" 00:20:55.805 } 00:20:55.805 } 00:20:55.805 ] 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.805 [2024-11-04 16:32:22.384723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:55.805 [2024-11-04 16:32:22.384778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1ec1fa0 (9): Bad file descriptor 00:20:55.805 [2024-11-04 16:32:22.517701] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.805 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.805 [ 00:20:55.805 { 00:20:55.805 "name": "nvme0n1", 00:20:55.805 "aliases": [ 00:20:55.805 "2ec3afda-0897-4056-9edf-81ef1170aa70" 00:20:55.805 ], 00:20:55.805 "product_name": "NVMe disk", 00:20:55.805 "block_size": 512, 00:20:55.806 "num_blocks": 2097152, 00:20:55.806 "uuid": "2ec3afda-0897-4056-9edf-81ef1170aa70", 00:20:55.806 "numa_id": 1, 00:20:55.806 "assigned_rate_limits": { 00:20:55.806 "rw_ios_per_sec": 0, 00:20:55.806 "rw_mbytes_per_sec": 0, 00:20:55.806 "r_mbytes_per_sec": 0, 00:20:55.806 "w_mbytes_per_sec": 0 00:20:55.806 }, 00:20:55.806 "claimed": false, 00:20:55.806 "zoned": false, 00:20:55.806 "supported_io_types": { 00:20:55.806 "read": true, 00:20:55.806 "write": true, 00:20:55.806 "unmap": false, 00:20:55.806 "flush": true, 00:20:55.806 "reset": true, 00:20:55.806 "nvme_admin": true, 00:20:55.806 "nvme_io": true, 00:20:55.806 "nvme_io_md": false, 00:20:55.806 "write_zeroes": true, 00:20:55.806 "zcopy": false, 00:20:55.806 "get_zone_info": false, 00:20:55.806 "zone_management": false, 00:20:55.806 "zone_append": false, 00:20:55.806 "compare": true, 00:20:55.806 "compare_and_write": true, 00:20:55.806 "abort": true, 00:20:55.806 "seek_hole": false, 00:20:55.806 "seek_data": false, 00:20:55.806 "copy": true, 00:20:55.806 "nvme_iov_md": false 00:20:55.806 }, 00:20:55.806 "memory_domains": [ 
00:20:55.806 { 00:20:55.806 "dma_device_id": "system", 00:20:55.806 "dma_device_type": 1 00:20:55.806 } 00:20:55.806 ], 00:20:55.806 "driver_specific": { 00:20:55.806 "nvme": [ 00:20:55.806 { 00:20:55.806 "trid": { 00:20:55.806 "trtype": "TCP", 00:20:55.806 "adrfam": "IPv4", 00:20:55.806 "traddr": "10.0.0.2", 00:20:55.806 "trsvcid": "4420", 00:20:55.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:55.806 }, 00:20:55.806 "ctrlr_data": { 00:20:55.806 "cntlid": 2, 00:20:55.806 "vendor_id": "0x8086", 00:20:55.806 "model_number": "SPDK bdev Controller", 00:20:55.806 "serial_number": "00000000000000000000", 00:20:55.806 "firmware_revision": "25.01", 00:20:55.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:55.806 "oacs": { 00:20:55.806 "security": 0, 00:20:55.806 "format": 0, 00:20:55.806 "firmware": 0, 00:20:55.806 "ns_manage": 0 00:20:55.806 }, 00:20:55.806 "multi_ctrlr": true, 00:20:55.806 "ana_reporting": false 00:20:55.806 }, 00:20:55.806 "vs": { 00:20:55.806 "nvme_version": "1.3" 00:20:55.806 }, 00:20:55.806 "ns_data": { 00:20:55.806 "id": 1, 00:20:55.806 "can_share": true 00:20:55.806 } 00:20:55.806 } 00:20:55.806 ], 00:20:55.806 "mp_policy": "active_passive" 00:20:55.806 } 00:20:55.806 } 00:20:55.806 ] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZOdQBSxMMF 
00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZOdQBSxMMF 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ZOdQBSxMMF 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 [2024-11-04 16:32:22.589335] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.806 [2024-11-04 16:32:22.589428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.806 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:55.806 [2024-11-04 16:32:22.605390] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.065 nvme0n1 00:20:56.065 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.065 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:56.065 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.065 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:56.065 [ 00:20:56.065 { 00:20:56.065 "name": "nvme0n1", 00:20:56.065 "aliases": [ 00:20:56.065 "2ec3afda-0897-4056-9edf-81ef1170aa70" 00:20:56.065 ], 00:20:56.065 "product_name": "NVMe disk", 00:20:56.065 "block_size": 512, 00:20:56.065 "num_blocks": 2097152, 00:20:56.065 "uuid": "2ec3afda-0897-4056-9edf-81ef1170aa70", 00:20:56.065 "numa_id": 1, 00:20:56.065 "assigned_rate_limits": { 00:20:56.065 "rw_ios_per_sec": 0, 00:20:56.065 
"rw_mbytes_per_sec": 0, 00:20:56.065 "r_mbytes_per_sec": 0, 00:20:56.065 "w_mbytes_per_sec": 0 00:20:56.065 }, 00:20:56.065 "claimed": false, 00:20:56.065 "zoned": false, 00:20:56.065 "supported_io_types": { 00:20:56.065 "read": true, 00:20:56.065 "write": true, 00:20:56.065 "unmap": false, 00:20:56.065 "flush": true, 00:20:56.065 "reset": true, 00:20:56.065 "nvme_admin": true, 00:20:56.065 "nvme_io": true, 00:20:56.065 "nvme_io_md": false, 00:20:56.065 "write_zeroes": true, 00:20:56.065 "zcopy": false, 00:20:56.065 "get_zone_info": false, 00:20:56.065 "zone_management": false, 00:20:56.065 "zone_append": false, 00:20:56.065 "compare": true, 00:20:56.065 "compare_and_write": true, 00:20:56.066 "abort": true, 00:20:56.066 "seek_hole": false, 00:20:56.066 "seek_data": false, 00:20:56.066 "copy": true, 00:20:56.066 "nvme_iov_md": false 00:20:56.066 }, 00:20:56.066 "memory_domains": [ 00:20:56.066 { 00:20:56.066 "dma_device_id": "system", 00:20:56.066 "dma_device_type": 1 00:20:56.066 } 00:20:56.066 ], 00:20:56.066 "driver_specific": { 00:20:56.066 "nvme": [ 00:20:56.066 { 00:20:56.066 "trid": { 00:20:56.066 "trtype": "TCP", 00:20:56.066 "adrfam": "IPv4", 00:20:56.066 "traddr": "10.0.0.2", 00:20:56.066 "trsvcid": "4421", 00:20:56.066 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:56.066 }, 00:20:56.066 "ctrlr_data": { 00:20:56.066 "cntlid": 3, 00:20:56.066 "vendor_id": "0x8086", 00:20:56.066 "model_number": "SPDK bdev Controller", 00:20:56.066 "serial_number": "00000000000000000000", 00:20:56.066 "firmware_revision": "25.01", 00:20:56.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:56.066 "oacs": { 00:20:56.066 "security": 0, 00:20:56.066 "format": 0, 00:20:56.066 "firmware": 0, 00:20:56.066 "ns_manage": 0 00:20:56.066 }, 00:20:56.066 "multi_ctrlr": true, 00:20:56.066 "ana_reporting": false 00:20:56.066 }, 00:20:56.066 "vs": { 00:20:56.066 "nvme_version": "1.3" 00:20:56.066 }, 00:20:56.066 "ns_data": { 00:20:56.066 "id": 1, 00:20:56.066 "can_share": true 00:20:56.066 } 
00:20:56.066 } 00:20:56.066 ], 00:20:56.066 "mp_policy": "active_passive" 00:20:56.066 } 00:20:56.066 } 00:20:56.066 ] 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ZOdQBSxMMF 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.066 rmmod nvme_tcp 00:20:56.066 rmmod nvme_fabrics 00:20:56.066 rmmod nvme_keyring 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:56.066 16:32:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2888981 ']' 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2888981 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2888981 ']' 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2888981 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2888981 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888981' 00:20:56.066 killing process with pid 2888981 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2888981 00:20:56.066 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2888981 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.325 
16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.325 16:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.228 16:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.228 00:20:58.228 real 0m8.958s 00:20:58.228 user 0m3.004s 00:20:58.228 sys 0m4.355s 00:20:58.228 16:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.228 16:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.228 ************************************ 00:20:58.228 END TEST nvmf_async_init 00:20:58.228 ************************************ 00:20:58.487 16:32:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 ************************************ 00:20:58.488 START TEST dma 00:20:58.488 ************************************ 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:20:58.488 * Looking for test storage... 00:20:58.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.488 --rc genhtml_branch_coverage=1 00:20:58.488 --rc genhtml_function_coverage=1 00:20:58.488 --rc genhtml_legend=1 00:20:58.488 --rc geninfo_all_blocks=1 00:20:58.488 --rc geninfo_unexecuted_blocks=1 00:20:58.488 00:20:58.488 ' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.488 --rc genhtml_branch_coverage=1 00:20:58.488 --rc genhtml_function_coverage=1 
00:20:58.488 --rc genhtml_legend=1 00:20:58.488 --rc geninfo_all_blocks=1 00:20:58.488 --rc geninfo_unexecuted_blocks=1 00:20:58.488 00:20:58.488 ' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.488 --rc genhtml_branch_coverage=1 00:20:58.488 --rc genhtml_function_coverage=1 00:20:58.488 --rc genhtml_legend=1 00:20:58.488 --rc geninfo_all_blocks=1 00:20:58.488 --rc geninfo_unexecuted_blocks=1 00:20:58.488 00:20:58.488 ' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.488 --rc genhtml_branch_coverage=1 00:20:58.488 --rc genhtml_function_coverage=1 00:20:58.488 --rc genhtml_legend=1 00:20:58.488 --rc geninfo_all_blocks=1 00:20:58.488 --rc geninfo_unexecuted_blocks=1 00:20:58.488 00:20:58.488 ' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:58.488 
16:32:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:58.488 00:20:58.488 real 0m0.203s 00:20:58.488 user 0m0.126s 00:20:58.488 sys 0m0.091s 00:20:58.488 16:32:25 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.488 16:32:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 ************************************ 00:20:58.488 END TEST dma 00:20:58.488 ************************************ 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.747 ************************************ 00:20:58.747 START TEST nvmf_identify 00:20:58.747 ************************************ 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:58.747 * Looking for test storage... 
00:20:58.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.747 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.748 --rc genhtml_branch_coverage=1 00:20:58.748 --rc genhtml_function_coverage=1 00:20:58.748 --rc genhtml_legend=1 00:20:58.748 --rc geninfo_all_blocks=1 00:20:58.748 --rc geninfo_unexecuted_blocks=1 00:20:58.748 00:20:58.748 ' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:20:58.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.748 --rc genhtml_branch_coverage=1 00:20:58.748 --rc genhtml_function_coverage=1 00:20:58.748 --rc genhtml_legend=1 00:20:58.748 --rc geninfo_all_blocks=1 00:20:58.748 --rc geninfo_unexecuted_blocks=1 00:20:58.748 00:20:58.748 ' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.748 --rc genhtml_branch_coverage=1 00:20:58.748 --rc genhtml_function_coverage=1 00:20:58.748 --rc genhtml_legend=1 00:20:58.748 --rc geninfo_all_blocks=1 00:20:58.748 --rc geninfo_unexecuted_blocks=1 00:20:58.748 00:20:58.748 ' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.748 --rc genhtml_branch_coverage=1 00:20:58.748 --rc genhtml_function_coverage=1 00:20:58.748 --rc genhtml_legend=1 00:20:58.748 --rc geninfo_all_blocks=1 00:20:58.748 --rc geninfo_unexecuted_blocks=1 00:20:58.748 00:20:58.748 ' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.748 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:59.007 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:59.008 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.008 16:32:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.279 16:32:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:21:04.279 Found 0000:86:00.0 (0x8086 - 0x159b)
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:21:04.279 Found 0000:86:00.1 (0x8086 - 0x159b)
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:21:04.279 Found net devices under 0000:86:00.0: cvl_0_0
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:21:04.279 Found net devices under 0000:86:00.1: cvl_0_1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:04.279 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:04.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:04.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms
00:21:04.280
00:21:04.280 --- 10.0.0.2 ping statistics ---
00:21:04.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:04.280 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:04.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:04.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:21:04.280
00:21:04.280 --- 10.0.0.1 ping statistics ---
00:21:04.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:04.280 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2892675
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2892675
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2892675 ']'
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:04.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
[2024-11-04 16:32:30.703813] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
[2024-11-04 16:32:30.703859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-04 16:32:30.771654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-04 16:32:30.815527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-04 16:32:30.815566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-04 16:32:30.815573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-04 16:32:30.815580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-04 16:32:30.815585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:04.280 [2024-11-04 16:32:30.817062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-04 16:32:30.817157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-04 16:32:30.817247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-04 16:32:30.817248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
[2024-11-04 16:32:30.913896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 Malloc0
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
[2024-11-04 16:32:31.011000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:04.280 [
00:21:04.280 {
00:21:04.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:04.280 "subtype": "Discovery",
00:21:04.280 "listen_addresses": [
00:21:04.280 {
00:21:04.280 "trtype": "TCP",
00:21:04.280 "adrfam": "IPv4",
00:21:04.280 "traddr": "10.0.0.2",
00:21:04.280 "trsvcid": "4420"
00:21:04.280 }
00:21:04.280 ],
00:21:04.280 "allow_any_host": true,
00:21:04.280 "hosts": []
00:21:04.280 },
00:21:04.280 {
00:21:04.280 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:04.280 "subtype": "NVMe",
00:21:04.280 "listen_addresses": [
00:21:04.280 {
00:21:04.280 "trtype": "TCP",
00:21:04.280 "adrfam": "IPv4",
00:21:04.280 "traddr": "10.0.0.2",
00:21:04.280 "trsvcid": "4420"
00:21:04.280 }
00:21:04.280 ],
00:21:04.280 "allow_any_host": true,
00:21:04.280 "hosts": [],
00:21:04.280 "serial_number": "SPDK00000000000001",
00:21:04.280 "model_number": "SPDK bdev Controller",
00:21:04.280 "max_namespaces": 32,
00:21:04.280 "min_cntlid": 1,
00:21:04.280 "max_cntlid": 65519,
00:21:04.280 "namespaces": [
00:21:04.280 {
00:21:04.280 "nsid": 1,
00:21:04.280 "bdev_name": "Malloc0",
00:21:04.280 "name": "Malloc0",
00:21:04.280 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:21:04.280 "eui64": "ABCDEF0123456789",
00:21:04.280 "uuid": "9141657a-9a8a-478a-9dbf-94252550657d"
00:21:04.280 }
00:21:04.280 ]
00:21:04.280 }
00:21:04.280 ]
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.280 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:21:04.280 [2024-11-04 16:32:31.061726] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization...
[2024-11-04 16:32:31.061759] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892812 ]
[2024-11-04 16:32:31.101095] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
[2024-11-04 16:32:31.101140] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
[2024-11-04 16:32:31.101146] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
[2024-11-04 16:32:31.101156] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
[2024-11-04 16:32:31.101164] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:04.543 [2024-11-04 16:32:31.104874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
[2024-11-04 16:32:31.104909] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc9f690 0
[2024-11-04 16:32:31.111628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
[2024-11-04 16:32:31.111644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
[2024-11-04 16:32:31.111649] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
[2024-11-04 16:32:31.111652] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
[2024-11-04 16:32:31.111684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.111690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.111694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.111707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
[2024-11-04 16:32:31.111722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.118609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.118617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.118620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.118624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.118635] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
[2024-11-04 16:32:31.118641] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
[2024-11-04 16:32:31.118646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
[2024-11-04 16:32:31.118660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.118663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.118667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
00:21:04.543 [2024-11-04 16:32:31.118673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.118688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.118846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.118851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.118854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.118858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.118863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
[2024-11-04 16:32:31.118869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
[2024-11-04 16:32:31.118876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.118879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.118882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.118888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.118897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.118965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.118971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:04.543 [2024-11-04 16:32:31.118974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.118977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.118982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
[2024-11-04 16:32:31.118989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
[2024-11-04 16:32:31.118995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.118998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.119001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.119007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.119016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.119081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.119086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.119089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.119097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
[2024-11-04 16:32:31.119105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.119109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.119112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.119117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.119126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.119191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.119198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.119201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.119209] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
[2024-11-04 16:32:31.119213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
[2024-11-04 16:32:31.119221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
[2024-11-04 16:32:31.119329] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
[2024-11-04 16:32:31.119333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
[2024-11-04 16:32:31.119342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.119345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.119348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.119353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.119363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.119430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.119436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.119439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.119446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
[2024-11-04 16:32:31.119454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.119457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.119460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.119466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.119475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.119546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.119552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.119555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.119562] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[2024-11-04 16:32:31.119566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
[2024-11-04 16:32:31.119573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:21:04.544 [2024-11-04 16:32:31.119582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
[2024-11-04 16:32:31.119591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.119595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.119604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 16:32:31.119614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.119714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
[2024-11-04 16:32:31.119720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
[2024-11-04 16:32:31.119723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119726] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9f690): datao=0, datal=4096, cccid=0
[2024-11-04 16:32:31.119731] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd01100) on tqpair(0xc9f690): expected_datao=0, payload_size=4096
[2024-11-04 16:32:31.119735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.119749] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
[2024-11-04 16:32:31.119754] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
[2024-11-04 16:32:31.164608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.164618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.164621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.164625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.164632] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
[2024-11-04 16:32:31.164637] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
[2024-11-04 16:32:31.164641] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
[2024-11-04 16:32:31.164646] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
[2024-11-04 16:32:31.164654] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
[2024-11-04 16:32:31.164658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
[2024-11-04 16:32:31.164667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
[2024-11-04 16:32:31.164673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.164677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.164680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.164687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
[2024-11-04 16:32:31.164699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0
[2024-11-04 16:32:31.164878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-11-04 16:32:31.164884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-11-04 16:32:31.164887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-11-04 16:32:31.164890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690
[2024-11-04 16:32:31.164900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-11-04 16:32:31.164903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-11-04 16:32:31.164908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9f690)
[2024-11-04 16:32:31.164914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.544 [2024-11-04 16:32:31.164919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.164930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.544 [2024-11-04 16:32:31.164935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.164946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.544 [2024-11-04 16:32:31.164951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.164962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.544 [2024-11-04 16:32:31.164966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:04.544 [2024-11-04 16:32:31.164974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:21:04.544 [2024-11-04 16:32:31.164980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.164983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.164989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-04 16:32:31.165000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01100, cid 0, qid 0 00:21:04.544 [2024-11-04 16:32:31.165005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01280, cid 1, qid 0 00:21:04.544 [2024-11-04 16:32:31.165009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01400, cid 2, qid 0 00:21:04.544 [2024-11-04 16:32:31.165013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.544 [2024-11-04 16:32:31.165016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01700, cid 4, qid 0 00:21:04.544 [2024-11-04 16:32:31.165116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.544 [2024-11-04 16:32:31.165122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.544 [2024-11-04 16:32:31.165125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01700) on tqpair=0xc9f690 00:21:04.544 [2024-11-04 16:32:31.165135] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:04.544 [2024-11-04 16:32:31.165140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:04.544 [2024-11-04 16:32:31.165150] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.165159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-04 16:32:31.165171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01700, cid 4, qid 0 00:21:04.544 [2024-11-04 16:32:31.165291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.544 [2024-11-04 16:32:31.165296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.544 [2024-11-04 16:32:31.165299] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9f690): datao=0, datal=4096, cccid=4 00:21:04.544 [2024-11-04 16:32:31.165306] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd01700) on tqpair(0xc9f690): expected_datao=0, payload_size=4096 00:21:04.544 [2024-11-04 16:32:31.165310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165319] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.544 [2024-11-04 16:32:31.165334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.544 [2024-11-04 16:32:31.165337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01700) on tqpair=0xc9f690 00:21:04.544 [2024-11-04 16:32:31.165350] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:04.544 [2024-11-04 16:32:31.165372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.165381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.544 [2024-11-04 16:32:31.165387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc9f690) 00:21:04.544 [2024-11-04 16:32:31.165398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.544 [2024-11-04 16:32:31.165411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01700, cid 4, qid 0 00:21:04.544 [2024-11-04 16:32:31.165415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01880, cid 5, qid 0 00:21:04.544 [2024-11-04 16:32:31.165537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.544 [2024-11-04 16:32:31.165542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.544 [2024-11-04 16:32:31.165545] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9f690): datao=0, datal=1024, cccid=4 00:21:04.544 [2024-11-04 16:32:31.165552] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd01700) on tqpair(0xc9f690): expected_datao=0, 
payload_size=1024 00:21:04.544 [2024-11-04 16:32:31.165556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.544 [2024-11-04 16:32:31.165569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.544 [2024-11-04 16:32:31.165573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.544 [2024-11-04 16:32:31.165576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.165579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01880) on tqpair=0xc9f690 00:21:04.545 [2024-11-04 16:32:31.208607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.545 [2024-11-04 16:32:31.208621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.545 [2024-11-04 16:32:31.208624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.208628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01700) on tqpair=0xc9f690 00:21:04.545 [2024-11-04 16:32:31.208639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.208643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9f690) 00:21:04.545 [2024-11-04 16:32:31.208649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-04 16:32:31.208666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01700, cid 4, qid 0 00:21:04.545 [2024-11-04 16:32:31.208830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.545 [2024-11-04 16:32:31.208835] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.545 [2024-11-04 16:32:31.208838] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.208841] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9f690): datao=0, datal=3072, cccid=4 00:21:04.545 [2024-11-04 16:32:31.208846] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd01700) on tqpair(0xc9f690): expected_datao=0, payload_size=3072 00:21:04.545 [2024-11-04 16:32:31.208849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.208869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.208873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.250745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.545 [2024-11-04 16:32:31.250756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.545 [2024-11-04 16:32:31.250759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.250762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01700) on tqpair=0xc9f690 00:21:04.545 [2024-11-04 16:32:31.250771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.545 [2024-11-04 16:32:31.250775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9f690) 00:21:04.545 [2024-11-04 16:32:31.250781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.545 [2024-11-04 16:32:31.250795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01700, cid 4, qid 0 00:21:04.545 [2024-11-04 16:32:31.250915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.545 [2024-11-04 
16:32:31.250920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:04.545 [2024-11-04 16:32:31.250923] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:04.545 [2024-11-04 16:32:31.250926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9f690): datao=0, datal=8, cccid=4
00:21:04.545 [2024-11-04 16:32:31.250930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd01700) on tqpair(0xc9f690): expected_datao=0, payload_size=8
00:21:04.545 [2024-11-04 16:32:31.250933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:04.545 [2024-11-04 16:32:31.250939] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:04.545 [2024-11-04 16:32:31.250942] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:04.545 [2024-11-04 16:32:31.295611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:04.545 [2024-11-04 16:32:31.295621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:04.545 [2024-11-04 16:32:31.295624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:04.545 [2024-11-04 16:32:31.295627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01700) on tqpair=0xc9f690
00:21:04.545 =====================================================
00:21:04.545 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:04.545 =====================================================
00:21:04.545 Controller Capabilities/Features
00:21:04.545 ================================
00:21:04.545 Vendor ID: 0000
00:21:04.545 Subsystem Vendor ID: 0000
00:21:04.545 Serial Number: ....................
00:21:04.545 Model Number: ........................................
00:21:04.545 Firmware Version: 25.01
00:21:04.545 Recommended Arb Burst: 0
00:21:04.545 IEEE OUI Identifier: 00 00 00
00:21:04.545 Multi-path I/O
00:21:04.545 May have multiple subsystem ports: No
00:21:04.545 May have multiple controllers: No
00:21:04.545 Associated with SR-IOV VF: No
00:21:04.545 Max Data Transfer Size: 131072
00:21:04.545 Max Number of Namespaces: 0
00:21:04.545 Max Number of I/O Queues: 1024
00:21:04.545 NVMe Specification Version (VS): 1.3
00:21:04.545 NVMe Specification Version (Identify): 1.3
00:21:04.545 Maximum Queue Entries: 128
00:21:04.545 Contiguous Queues Required: Yes
00:21:04.545 Arbitration Mechanisms Supported
00:21:04.545 Weighted Round Robin: Not Supported
00:21:04.545 Vendor Specific: Not Supported
00:21:04.545 Reset Timeout: 15000 ms
00:21:04.545 Doorbell Stride: 4 bytes
00:21:04.545 NVM Subsystem Reset: Not Supported
00:21:04.545 Command Sets Supported
00:21:04.545 NVM Command Set: Supported
00:21:04.545 Boot Partition: Not Supported
00:21:04.545 Memory Page Size Minimum: 4096 bytes
00:21:04.545 Memory Page Size Maximum: 4096 bytes
00:21:04.545 Persistent Memory Region: Not Supported
00:21:04.545 Optional Asynchronous Events Supported
00:21:04.545 Namespace Attribute Notices: Not Supported
00:21:04.545 Firmware Activation Notices: Not Supported
00:21:04.545 ANA Change Notices: Not Supported
00:21:04.545 PLE Aggregate Log Change Notices: Not Supported
00:21:04.545 LBA Status Info Alert Notices: Not Supported
00:21:04.545 EGE Aggregate Log Change Notices: Not Supported
00:21:04.545 Normal NVM Subsystem Shutdown event: Not Supported
00:21:04.545 Zone Descriptor Change Notices: Not Supported
00:21:04.545 Discovery Log Change Notices: Supported
00:21:04.545 Controller Attributes
00:21:04.545 128-bit Host Identifier: Not Supported
00:21:04.545 Non-Operational Permissive Mode: Not Supported
00:21:04.545 NVM Sets: Not Supported
00:21:04.545 Read Recovery Levels: Not Supported
00:21:04.545 Endurance Groups: Not Supported
00:21:04.545 Predictable Latency Mode: Not Supported
00:21:04.545 Traffic Based Keep ALive: Not Supported
00:21:04.545 Namespace Granularity: Not Supported
00:21:04.545 SQ Associations: Not Supported
00:21:04.545 UUID List: Not Supported
00:21:04.545 Multi-Domain Subsystem: Not Supported
00:21:04.545 Fixed Capacity Management: Not Supported
00:21:04.545 Variable Capacity Management: Not Supported
00:21:04.545 Delete Endurance Group: Not Supported
00:21:04.545 Delete NVM Set: Not Supported
00:21:04.545 Extended LBA Formats Supported: Not Supported
00:21:04.545 Flexible Data Placement Supported: Not Supported
00:21:04.545
00:21:04.545 Controller Memory Buffer Support
00:21:04.545 ================================
00:21:04.545 Supported: No
00:21:04.545
00:21:04.545 Persistent Memory Region Support
00:21:04.545 ================================
00:21:04.545 Supported: No
00:21:04.545
00:21:04.545 Admin Command Set Attributes
00:21:04.545 ============================
00:21:04.545 Security Send/Receive: Not Supported
00:21:04.545 Format NVM: Not Supported
00:21:04.545 Firmware Activate/Download: Not Supported
00:21:04.545 Namespace Management: Not Supported
00:21:04.545 Device Self-Test: Not Supported
00:21:04.545 Directives: Not Supported
00:21:04.545 NVMe-MI: Not Supported
00:21:04.545 Virtualization Management: Not Supported
00:21:04.545 Doorbell Buffer Config: Not Supported
00:21:04.545 Get LBA Status Capability: Not Supported
00:21:04.545 Command & Feature Lockdown Capability: Not Supported
00:21:04.545 Abort Command Limit: 1
00:21:04.545 Async Event Request Limit: 4
00:21:04.545 Number of Firmware Slots: N/A
00:21:04.545 Firmware Slot 1 Read-Only: N/A
00:21:04.545 Firmware Activation Without Reset: N/A
00:21:04.545 Multiple Update Detection Support: N/A
00:21:04.545 Firmware Update Granularity: No Information Provided
00:21:04.545 Per-Namespace SMART Log: No
00:21:04.545 Asymmetric Namespace Access Log Page: Not Supported
00:21:04.545 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:04.545 Command Effects Log Page: Not Supported
00:21:04.545 Get Log Page Extended Data: Supported
00:21:04.545 Telemetry Log Pages: Not Supported
00:21:04.545 Persistent Event Log Pages: Not Supported
00:21:04.545 Supported Log Pages Log Page: May Support
00:21:04.545 Commands Supported & Effects Log Page: Not Supported
00:21:04.545 Feature Identifiers & Effects Log Page:May Support
00:21:04.545 NVMe-MI Commands & Effects Log Page: May Support
00:21:04.545 Data Area 4 for Telemetry Log: Not Supported
00:21:04.545 Error Log Page Entries Supported: 128
00:21:04.545 Keep Alive: Not Supported
00:21:04.545
00:21:04.545 NVM Command Set Attributes
00:21:04.545 ==========================
00:21:04.545 Submission Queue Entry Size
00:21:04.545 Max: 1
00:21:04.545 Min: 1
00:21:04.545 Completion Queue Entry Size
00:21:04.545 Max: 1
00:21:04.545 Min: 1
00:21:04.545 Number of Namespaces: 0
00:21:04.545 Compare Command: Not Supported
00:21:04.545 Write Uncorrectable Command: Not Supported
00:21:04.545 Dataset Management Command: Not Supported
00:21:04.545 Write Zeroes Command: Not Supported
00:21:04.545 Set Features Save Field: Not Supported
00:21:04.545 Reservations: Not Supported
00:21:04.545 Timestamp: Not Supported
00:21:04.545 Copy: Not Supported
00:21:04.545 Volatile Write Cache: Not Present
00:21:04.545 Atomic Write Unit (Normal): 1
00:21:04.546 Atomic Write Unit (PFail): 1
00:21:04.546 Atomic Compare & Write Unit: 1
00:21:04.546 Fused Compare & Write: Supported
00:21:04.546 Scatter-Gather List
00:21:04.546 SGL Command Set: Supported
00:21:04.546 SGL Keyed: Supported
00:21:04.546 SGL Bit Bucket Descriptor: Not Supported
00:21:04.546 SGL Metadata Pointer: Not Supported
00:21:04.546 Oversized SGL: Not Supported
00:21:04.546 SGL Metadata Address: Not Supported
00:21:04.546 SGL Offset: Supported
00:21:04.546 Transport SGL Data Block: Not Supported
00:21:04.546 Replay Protected Memory Block: Not Supported
00:21:04.546
00:21:04.546 Firmware Slot Information
00:21:04.546 =========================
00:21:04.546 Active slot: 0
00:21:04.546
00:21:04.546
00:21:04.546 Error Log
00:21:04.546 =========
00:21:04.546
00:21:04.546 Active Namespaces
00:21:04.546 =================
00:21:04.546 Discovery Log Page
00:21:04.546 ==================
00:21:04.546 Generation Counter: 2
00:21:04.546 Number of Records: 2
00:21:04.546 Record Format: 0
00:21:04.546
00:21:04.546 Discovery Log Entry 0
00:21:04.546 ----------------------
00:21:04.546 Transport Type: 3 (TCP)
00:21:04.546 Address Family: 1 (IPv4)
00:21:04.546 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:04.546 Entry Flags:
00:21:04.546 Duplicate Returned Information: 1
00:21:04.546 Explicit Persistent Connection Support for Discovery: 1
00:21:04.546 Transport Requirements:
00:21:04.546 Secure Channel: Not Required
00:21:04.546 Port ID: 0 (0x0000)
00:21:04.546 Controller ID: 65535 (0xffff)
00:21:04.546 Admin Max SQ Size: 128
00:21:04.546 Transport Service Identifier: 4420
00:21:04.546 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:04.546 Transport Address: 10.0.0.2
00:21:04.546 Discovery Log Entry 1
00:21:04.546 ----------------------
00:21:04.546 Transport Type: 3 (TCP)
00:21:04.546 Address Family: 1 (IPv4)
00:21:04.546 Subsystem Type: 2 (NVM Subsystem)
00:21:04.546 Entry Flags:
00:21:04.546 Duplicate Returned Information: 0
00:21:04.546 Explicit Persistent Connection Support for Discovery: 0
00:21:04.546 Transport Requirements:
00:21:04.546 Secure Channel: Not Required
00:21:04.546 Port ID: 0 (0x0000)
00:21:04.546 Controller ID: 65535 (0xffff)
00:21:04.546 Admin Max SQ Size: 128
00:21:04.546 Transport Service Identifier: 4420
00:21:04.546 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:04.546 Transport Address: 10.0.0.2 [2024-11-04 16:32:31.295711] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:04.546 [2024-11-04
16:32:31.295723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01100) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.295730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-04 16:32:31.295734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01280) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.295738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-04 16:32:31.295742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01400) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.295746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-04 16:32:31.295751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.295755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.546 [2024-11-04 16:32:31.295763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.295766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.295769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.295776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.295789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.295858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 
16:32:31.295863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.295867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.295870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.295878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.295881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.295885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.295890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.295903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.296014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.296021] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:04.546 [2024-11-04 16:32:31.296026] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:04.546 [2024-11-04 16:32:31.296033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 
[2024-11-04 16:32:31.296040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.296045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.296055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.296167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.296179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.296191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.296200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.296271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 
00:21:04.546 [2024-11-04 16:32:31.296282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.296294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.296304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.296417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.296431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.296443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.296451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 
[2024-11-04 16:32:31.296567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.546 [2024-11-04 16:32:31.296578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.546 [2024-11-04 16:32:31.296584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.546 [2024-11-04 16:32:31.296590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.546 [2024-11-04 16:32:31.296598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.546 [2024-11-04 16:32:31.296659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.546 [2024-11-04 16:32:31.296665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.546 [2024-11-04 16:32:31.296668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.296681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.296693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.296702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 
00:21:04.547 [2024-11-04 16:32:31.296768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.296773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.296776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.296787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.296799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.296808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.296912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.296917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.296920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.296931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.296937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.296943] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.296952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297190] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297430] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 
16:32:31.297725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 16:32:31.297762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.547 [2024-11-04 16:32:31.297847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.547 [2024-11-04 16:32:31.297854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.547 [2024-11-04 16:32:31.297859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.547 [2024-11-04 
16:32:31.297868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.547 [2024-11-04 16:32:31.297970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.547 [2024-11-04 16:32:31.297975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.547 [2024-11-04 16:32:31.297978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.297981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.297989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.297993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.297995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:04.548 [2024-11-04 16:32:31.298334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:04.548 [2024-11-04 16:32:31.298540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:21:04.548 [2024-11-04 16:32:31.298761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.298797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.298890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.548 [2024-11-04 16:32:31.298899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.298977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.298982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.298985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.298988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.298996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.299008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.299017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.299094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.299099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.299102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.299113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299119] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.299125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.299135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.299206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.299211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.299214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.548 [2024-11-04 16:32:31.299226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.548 [2024-11-04 16:32:31.299237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.548 [2024-11-04 16:32:31.299247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.548 [2024-11-04 16:32:31.299309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.548 [2024-11-04 16:32:31.299315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.548 [2024-11-04 16:32:31.299317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.548 [2024-11-04 16:32:31.299321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.549 [2024-11-04 16:32:31.299329] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.549 [2024-11-04 16:32:31.299340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.549 [2024-11-04 16:32:31.299350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.549 [2024-11-04 16:32:31.299427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.549 [2024-11-04 16:32:31.299432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.549 [2024-11-04 16:32:31.299435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.549 [2024-11-04 16:32:31.299446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.549 [2024-11-04 16:32:31.299458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.549 [2024-11-04 16:32:31.299468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.549 [2024-11-04 16:32:31.299542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.549 [2024-11-04 16:32:31.299547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.549 [2024-11-04 16:32:31.299550] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.549 [2024-11-04 16:32:31.299561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.299568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.549 [2024-11-04 16:32:31.299573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.549 [2024-11-04 16:32:31.299582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.549 [2024-11-04 16:32:31.303607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.549 [2024-11-04 16:32:31.303614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.549 [2024-11-04 16:32:31.303617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.303621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.549 [2024-11-04 16:32:31.303630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.303633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.303636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9f690) 00:21:04.549 [2024-11-04 16:32:31.303642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.549 [2024-11-04 16:32:31.303652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd01580, cid 3, qid 0 00:21:04.549 [2024-11-04 
16:32:31.303805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.549 [2024-11-04 16:32:31.303811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.549 [2024-11-04 16:32:31.303814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.549 [2024-11-04 16:32:31.303817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd01580) on tqpair=0xc9f690 00:21:04.549 [2024-11-04 16:32:31.303823] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:04.549 00:21:04.549 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:04.549 [2024-11-04 16:32:31.342877] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:21:04.549 [2024-11-04 16:32:31.342925] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892818 ] 00:21:04.813 [2024-11-04 16:32:31.381783] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:04.813 [2024-11-04 16:32:31.381820] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:04.813 [2024-11-04 16:32:31.381825] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:04.813 [2024-11-04 16:32:31.381834] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:04.813 [2024-11-04 16:32:31.381840] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:04.813 [2024-11-04 16:32:31.385787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:04.813 [2024-11-04 16:32:31.385812] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e8f690 0 00:21:04.813 [2024-11-04 16:32:31.392612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:04.813 [2024-11-04 16:32:31.392626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:04.813 [2024-11-04 16:32:31.392631] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:04.813 [2024-11-04 16:32:31.392634] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:04.813 [2024-11-04 16:32:31.392660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.813 [2024-11-04 16:32:31.392664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.813 [2024-11-04 16:32:31.392667] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.813 [2024-11-04 16:32:31.392679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:04.813 [2024-11-04 16:32:31.392695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.813 [2024-11-04 16:32:31.399608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.813 [2024-11-04 16:32:31.399616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.813 [2024-11-04 16:32:31.399619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.813 [2024-11-04 16:32:31.399622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.813 [2024-11-04 16:32:31.399633] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:04.814 [2024-11-04 16:32:31.399638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:04.814 [2024-11-04 16:32:31.399643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:04.814 [2024-11-04 16:32:31.399653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.399667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.399679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.399835] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.399841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.399844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.399851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:04.814 [2024-11-04 16:32:31.399858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:04.814 [2024-11-04 16:32:31.399864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.399876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.399885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.399947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.399953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.399955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.399963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:21:04.814 [2024-11-04 16:32:31.399969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.399975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.399981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.399987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.399999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.400065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.400070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.400073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.400081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.400089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.400101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.400109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.400177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.400183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.400186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.400192] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:04.814 [2024-11-04 16:32:31.400197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.400203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.400310] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:04.814 [2024-11-04 16:32:31.400314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.400320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.400332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.400342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.400401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.400406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.400409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.400416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:04.814 [2024-11-04 16:32:31.400424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.400440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.400449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.400511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.400516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.400519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.400526] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:04.814 [2024-11-04 16:32:31.400530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.400536] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:04.814 [2024-11-04 16:32:31.400543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.400550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.400558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.400568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.400662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.814 [2024-11-04 16:32:31.400668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.814 [2024-11-04 16:32:31.400671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=4096, cccid=0 00:21:04.814 [2024-11-04 16:32:31.400680] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1100) on tqpair(0x1e8f690): expected_datao=0, payload_size=4096 00:21:04.814 [2024-11-04 16:32:31.400683] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400697] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.400701] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.445617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.445620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.445631] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:04.814 [2024-11-04 16:32:31.445636] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:04.814 [2024-11-04 16:32:31.445640] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:04.814 [2024-11-04 16:32:31.445644] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:04.814 [2024-11-04 16:32:31.445651] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:04.814 [2024-11-04 16:32:31.445656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.445664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.445673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445676] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.814 [2024-11-04 16:32:31.445712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.445865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.445871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.445874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.445885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.814 [2024-11-04 16:32:31.445903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:04.814 [2024-11-04 16:32:31.445920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.814 [2024-11-04 16:32:31.445938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.814 [2024-11-04 16:32:31.445953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.445961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.445967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.445970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.445976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.445987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1ef1100, cid 0, qid 0 00:21:04.814 [2024-11-04 16:32:31.445992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1280, cid 1, qid 0 00:21:04.814 [2024-11-04 16:32:31.445996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1400, cid 2, qid 0 00:21:04.814 [2024-11-04 16:32:31.446002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.814 [2024-11-04 16:32:31.446007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.814 [2024-11-04 16:32:31.446104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.446110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.446113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.446123] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:04.814 [2024-11-04 16:32:31.446128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.446135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.446141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.446147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 
16:32:31.446153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.446159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:04.814 [2024-11-04 16:32:31.446169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.814 [2024-11-04 16:32:31.446231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.446237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.446241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.446295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.446304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.446312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.446320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.446331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.814 [2024-11-04 16:32:31.446406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.814 [2024-11-04 16:32:31.446412] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.814 [2024-11-04 16:32:31.446416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446419] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=4096, cccid=4 00:21:04.814 [2024-11-04 16:32:31.446423] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1700) on tqpair(0x1e8f690): expected_datao=0, payload_size=4096 00:21:04.814 [2024-11-04 16:32:31.446427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446440] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.446445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.487736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.814 [2024-11-04 16:32:31.487749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.814 [2024-11-04 16:32:31.487752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.487755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.814 [2024-11-04 16:32:31.487765] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:04.814 [2024-11-04 16:32:31.487779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.487788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:04.814 [2024-11-04 16:32:31.487795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.487798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1e8f690) 00:21:04.814 [2024-11-04 16:32:31.487804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.814 [2024-11-04 16:32:31.487816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.814 [2024-11-04 16:32:31.487898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.814 [2024-11-04 16:32:31.487904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.814 [2024-11-04 16:32:31.487906] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.814 [2024-11-04 16:32:31.487909] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=4096, cccid=4 00:21:04.815 [2024-11-04 16:32:31.487913] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1700) on tqpair(0x1e8f690): expected_datao=0, payload_size=4096 00:21:04.815 [2024-11-04 16:32:31.487917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.487930] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.487934] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.532621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.532624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.532641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:04.815 
[2024-11-04 16:32:31.532651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.532658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.532668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.532680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.815 [2024-11-04 16:32:31.532749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.815 [2024-11-04 16:32:31.532755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.815 [2024-11-04 16:32:31.532758] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=4096, cccid=4 00:21:04.815 [2024-11-04 16:32:31.532765] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1700) on tqpair(0x1e8f690): expected_datao=0, payload_size=4096 00:21:04.815 [2024-11-04 16:32:31.532774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.532793] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.573745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.573748] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.573759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573794] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:04.815 [2024-11-04 16:32:31.573799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:04.815 [2024-11-04 16:32:31.573803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:04.815 [2024-11-04 16:32:31.573815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573819] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.573825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.573830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.573842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.815 [2024-11-04 16:32:31.573855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.815 [2024-11-04 16:32:31.573860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1880, cid 5, qid 0 00:21:04.815 [2024-11-04 16:32:31.573938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.573944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.573947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.573956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.573960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.573963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1880) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 
16:32:31.573977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.573980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.573986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.573995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1880, cid 5, qid 0 00:21:04.815 [2024-11-04 16:32:31.574058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.574064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.574067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1880) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.574077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.574096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1880, cid 5, qid 0 00:21:04.815 [2024-11-04 16:32:31.574158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.574163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.574166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1ef1880) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.574177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.574194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1880, cid 5, qid 0 00:21:04.815 [2024-11-04 16:32:31.574252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.574258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.574261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1880) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.574279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.574294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.815 [2024-11-04 16:32:31.574308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.574326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e8f690) 00:21:04.815 [2024-11-04 16:32:31.574334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.815 [2024-11-04 16:32:31.574345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1880, cid 5, qid 0 00:21:04.815 [2024-11-04 16:32:31.574349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1700, cid 4, qid 0 00:21:04.815 [2024-11-04 16:32:31.574353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1a00, cid 6, qid 0 00:21:04.815 [2024-11-04 16:32:31.574357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1b80, cid 7, qid 0 00:21:04.815 [2024-11-04 16:32:31.574494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.815 [2024-11-04 16:32:31.574500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.815 [2024-11-04 16:32:31.574503] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574506] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=8192, cccid=5 00:21:04.815 [2024-11-04 16:32:31.574510] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1880) on tqpair(0x1e8f690): expected_datao=0, payload_size=8192 00:21:04.815 [2024-11-04 16:32:31.574514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574550] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574554] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.815 [2024-11-04 16:32:31.574563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.815 [2024-11-04 16:32:31.574566] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574569] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=512, cccid=4 00:21:04.815 [2024-11-04 16:32:31.574573] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1700) on tqpair(0x1e8f690): expected_datao=0, payload_size=512 00:21:04.815 [2024-11-04 16:32:31.574577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574582] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574585] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.574590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.815 [2024-11-04 16:32:31.574594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.815 [2024-11-04 16:32:31.574597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578608] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=512, cccid=6 00:21:04.815 [2024-11-04 16:32:31.578612] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1ef1a00) on tqpair(0x1e8f690): expected_datao=0, payload_size=512 00:21:04.815 [2024-11-04 16:32:31.578616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578624] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:04.815 [2024-11-04 16:32:31.578634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:04.815 [2024-11-04 16:32:31.578637] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578640] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f690): datao=0, datal=4096, cccid=7 00:21:04.815 [2024-11-04 16:32:31.578643] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef1b80) on tqpair(0x1e8f690): expected_datao=0, payload_size=4096 00:21:04.815 [2024-11-04 16:32:31.578649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578658] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.578670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.578673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1880) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.578686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.578691] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.578694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1700) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.578706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.578711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.578714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1a00) on tqpair=0x1e8f690 00:21:04.815 [2024-11-04 16:32:31.578722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.815 [2024-11-04 16:32:31.578727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.815 [2024-11-04 16:32:31.578730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.815 [2024-11-04 16:32:31.578733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1b80) on tqpair=0x1e8f690 00:21:04.815 ===================================================== 00:21:04.815 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.815 ===================================================== 00:21:04.815 Controller Capabilities/Features 00:21:04.815 ================================ 00:21:04.815 Vendor ID: 8086 00:21:04.815 Subsystem Vendor ID: 8086 00:21:04.815 Serial Number: SPDK00000000000001 00:21:04.815 Model Number: SPDK bdev Controller 00:21:04.815 Firmware Version: 25.01 00:21:04.815 Recommended Arb Burst: 6 00:21:04.815 IEEE OUI Identifier: e4 d2 5c 00:21:04.815 Multi-path I/O 00:21:04.815 May have multiple subsystem ports: Yes 00:21:04.815 May have multiple controllers: Yes 00:21:04.815 Associated with SR-IOV VF: No 
00:21:04.815 Max Data Transfer Size: 131072 00:21:04.815 Max Number of Namespaces: 32 00:21:04.815 Max Number of I/O Queues: 127 00:21:04.815 NVMe Specification Version (VS): 1.3 00:21:04.815 NVMe Specification Version (Identify): 1.3 00:21:04.815 Maximum Queue Entries: 128 00:21:04.815 Contiguous Queues Required: Yes 00:21:04.815 Arbitration Mechanisms Supported 00:21:04.815 Weighted Round Robin: Not Supported 00:21:04.815 Vendor Specific: Not Supported 00:21:04.815 Reset Timeout: 15000 ms 00:21:04.815 Doorbell Stride: 4 bytes 00:21:04.815 NVM Subsystem Reset: Not Supported 00:21:04.815 Command Sets Supported 00:21:04.815 NVM Command Set: Supported 00:21:04.815 Boot Partition: Not Supported 00:21:04.815 Memory Page Size Minimum: 4096 bytes 00:21:04.815 Memory Page Size Maximum: 4096 bytes 00:21:04.815 Persistent Memory Region: Not Supported 00:21:04.815 Optional Asynchronous Events Supported 00:21:04.815 Namespace Attribute Notices: Supported 00:21:04.815 Firmware Activation Notices: Not Supported 00:21:04.815 ANA Change Notices: Not Supported 00:21:04.815 PLE Aggregate Log Change Notices: Not Supported 00:21:04.815 LBA Status Info Alert Notices: Not Supported 00:21:04.815 EGE Aggregate Log Change Notices: Not Supported 00:21:04.815 Normal NVM Subsystem Shutdown event: Not Supported 00:21:04.815 Zone Descriptor Change Notices: Not Supported 00:21:04.815 Discovery Log Change Notices: Not Supported 00:21:04.815 Controller Attributes 00:21:04.815 128-bit Host Identifier: Supported 00:21:04.815 Non-Operational Permissive Mode: Not Supported 00:21:04.815 NVM Sets: Not Supported 00:21:04.815 Read Recovery Levels: Not Supported 00:21:04.815 Endurance Groups: Not Supported 00:21:04.815 Predictable Latency Mode: Not Supported 00:21:04.815 Traffic Based Keep ALive: Not Supported 00:21:04.815 Namespace Granularity: Not Supported 00:21:04.815 SQ Associations: Not Supported 00:21:04.815 UUID List: Not Supported 00:21:04.815 Multi-Domain Subsystem: Not Supported 00:21:04.815 
Fixed Capacity Management: Not Supported 00:21:04.815 Variable Capacity Management: Not Supported 00:21:04.815 Delete Endurance Group: Not Supported 00:21:04.815 Delete NVM Set: Not Supported 00:21:04.815 Extended LBA Formats Supported: Not Supported 00:21:04.815 Flexible Data Placement Supported: Not Supported 00:21:04.815 00:21:04.815 Controller Memory Buffer Support 00:21:04.815 ================================ 00:21:04.815 Supported: No 00:21:04.815 00:21:04.815 Persistent Memory Region Support 00:21:04.815 ================================ 00:21:04.815 Supported: No 00:21:04.815 00:21:04.815 Admin Command Set Attributes 00:21:04.815 ============================ 00:21:04.815 Security Send/Receive: Not Supported 00:21:04.815 Format NVM: Not Supported 00:21:04.815 Firmware Activate/Download: Not Supported 00:21:04.815 Namespace Management: Not Supported 00:21:04.815 Device Self-Test: Not Supported 00:21:04.815 Directives: Not Supported 00:21:04.815 NVMe-MI: Not Supported 00:21:04.815 Virtualization Management: Not Supported 00:21:04.815 Doorbell Buffer Config: Not Supported 00:21:04.815 Get LBA Status Capability: Not Supported 00:21:04.815 Command & Feature Lockdown Capability: Not Supported 00:21:04.815 Abort Command Limit: 4 00:21:04.815 Async Event Request Limit: 4 00:21:04.815 Number of Firmware Slots: N/A 00:21:04.815 Firmware Slot 1 Read-Only: N/A 00:21:04.815 Firmware Activation Without Reset: N/A 00:21:04.815 Multiple Update Detection Support: N/A 00:21:04.815 Firmware Update Granularity: No Information Provided 00:21:04.815 Per-Namespace SMART Log: No 00:21:04.815 Asymmetric Namespace Access Log Page: Not Supported 00:21:04.815 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:04.815 Command Effects Log Page: Supported 00:21:04.815 Get Log Page Extended Data: Supported 00:21:04.815 Telemetry Log Pages: Not Supported 00:21:04.815 Persistent Event Log Pages: Not Supported 00:21:04.815 Supported Log Pages Log Page: May Support 00:21:04.815 Commands Supported & 
Effects Log Page: Not Supported 00:21:04.815 Feature Identifiers & Effects Log Page:May Support 00:21:04.815 NVMe-MI Commands & Effects Log Page: May Support 00:21:04.815 Data Area 4 for Telemetry Log: Not Supported 00:21:04.815 Error Log Page Entries Supported: 128 00:21:04.815 Keep Alive: Supported 00:21:04.815 Keep Alive Granularity: 10000 ms 00:21:04.815 00:21:04.816 NVM Command Set Attributes 00:21:04.816 ========================== 00:21:04.816 Submission Queue Entry Size 00:21:04.816 Max: 64 00:21:04.816 Min: 64 00:21:04.816 Completion Queue Entry Size 00:21:04.816 Max: 16 00:21:04.816 Min: 16 00:21:04.816 Number of Namespaces: 32 00:21:04.816 Compare Command: Supported 00:21:04.816 Write Uncorrectable Command: Not Supported 00:21:04.816 Dataset Management Command: Supported 00:21:04.816 Write Zeroes Command: Supported 00:21:04.816 Set Features Save Field: Not Supported 00:21:04.816 Reservations: Supported 00:21:04.816 Timestamp: Not Supported 00:21:04.816 Copy: Supported 00:21:04.816 Volatile Write Cache: Present 00:21:04.816 Atomic Write Unit (Normal): 1 00:21:04.816 Atomic Write Unit (PFail): 1 00:21:04.816 Atomic Compare & Write Unit: 1 00:21:04.816 Fused Compare & Write: Supported 00:21:04.816 Scatter-Gather List 00:21:04.816 SGL Command Set: Supported 00:21:04.816 SGL Keyed: Supported 00:21:04.816 SGL Bit Bucket Descriptor: Not Supported 00:21:04.816 SGL Metadata Pointer: Not Supported 00:21:04.816 Oversized SGL: Not Supported 00:21:04.816 SGL Metadata Address: Not Supported 00:21:04.816 SGL Offset: Supported 00:21:04.816 Transport SGL Data Block: Not Supported 00:21:04.816 Replay Protected Memory Block: Not Supported 00:21:04.816 00:21:04.816 Firmware Slot Information 00:21:04.816 ========================= 00:21:04.816 Active slot: 1 00:21:04.816 Slot 1 Firmware Revision: 25.01 00:21:04.816 00:21:04.816 00:21:04.816 Commands Supported and Effects 00:21:04.816 ============================== 00:21:04.816 Admin Commands 00:21:04.816 -------------- 
00:21:04.816 Get Log Page (02h): Supported 00:21:04.816 Identify (06h): Supported 00:21:04.816 Abort (08h): Supported 00:21:04.816 Set Features (09h): Supported 00:21:04.816 Get Features (0Ah): Supported 00:21:04.816 Asynchronous Event Request (0Ch): Supported 00:21:04.816 Keep Alive (18h): Supported 00:21:04.816 I/O Commands 00:21:04.816 ------------ 00:21:04.816 Flush (00h): Supported LBA-Change 00:21:04.816 Write (01h): Supported LBA-Change 00:21:04.816 Read (02h): Supported 00:21:04.816 Compare (05h): Supported 00:21:04.816 Write Zeroes (08h): Supported LBA-Change 00:21:04.816 Dataset Management (09h): Supported LBA-Change 00:21:04.816 Copy (19h): Supported LBA-Change 00:21:04.816 00:21:04.816 Error Log 00:21:04.816 ========= 00:21:04.816 00:21:04.816 Arbitration 00:21:04.816 =========== 00:21:04.816 Arbitration Burst: 1 00:21:04.816 00:21:04.816 Power Management 00:21:04.816 ================ 00:21:04.816 Number of Power States: 1 00:21:04.816 Current Power State: Power State #0 00:21:04.816 Power State #0: 00:21:04.816 Max Power: 0.00 W 00:21:04.816 Non-Operational State: Operational 00:21:04.816 Entry Latency: Not Reported 00:21:04.816 Exit Latency: Not Reported 00:21:04.816 Relative Read Throughput: 0 00:21:04.816 Relative Read Latency: 0 00:21:04.816 Relative Write Throughput: 0 00:21:04.816 Relative Write Latency: 0 00:21:04.816 Idle Power: Not Reported 00:21:04.816 Active Power: Not Reported 00:21:04.816 Non-Operational Permissive Mode: Not Supported 00:21:04.816 00:21:04.816 Health Information 00:21:04.816 ================== 00:21:04.816 Critical Warnings: 00:21:04.816 Available Spare Space: OK 00:21:04.816 Temperature: OK 00:21:04.816 Device Reliability: OK 00:21:04.816 Read Only: No 00:21:04.816 Volatile Memory Backup: OK 00:21:04.816 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:04.816 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:04.816 Available Spare: 0% 00:21:04.816 Available Spare Threshold: 0% 00:21:04.816 Life Percentage 
Used:[2024-11-04 16:32:31.578813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.578817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.578823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.578835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1b80, cid 7, qid 0 00:21:04.816 [2024-11-04 16:32:31.578993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.578999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1b80) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579031] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:04.816 [2024-11-04 16:32:31.579038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1100) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.816 [2024-11-04 16:32:31.579048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1280) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.816 [2024-11-04 16:32:31.579057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1400) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.816 [2024-11-04 16:32:31.579065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.816 [2024-11-04 16:32:31.579076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579299] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:04.816 [2024-11-04 16:32:31.579303] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:04.816 [2024-11-04 16:32:31.579311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579413] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 
16:32:31.579641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 
16:32:31.579783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.579945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.579950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.579953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.579966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.579972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.579978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.579987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.580064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.580069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.580072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.580076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.580083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.580087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.580090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.816 [2024-11-04 16:32:31.580095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.816 [2024-11-04 16:32:31.580104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.816 [2024-11-04 16:32:31.580174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.816 [2024-11-04 16:32:31.580179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.816 [2024-11-04 16:32:31.580182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.580185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.816 [2024-11-04 16:32:31.580194] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.816 [2024-11-04 16:32:31.580197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.580200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.817 [2024-11-04 16:32:31.580205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.817 [2024-11-04 16:32:31.580216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.817 [2024-11-04 16:32:31.580275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.817 [2024-11-04 16:32:31.580280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.817 [2024-11-04 16:32:31.580283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.580286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.817 [2024-11-04 16:32:31.585607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.817 [2024-11-04 16:32:31.585615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.817 [2024-11-04 16:32:31.585618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.585621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.817 [2024-11-04 16:32:31.585631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.585635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.585638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f690) 00:21:04.817 [2024-11-04 16:32:31.585643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.817 [2024-11-04 16:32:31.585654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef1580, cid 3, qid 0 00:21:04.817 [2024-11-04 16:32:31.585808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:04.817 [2024-11-04
16:32:31.585813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:04.817 [2024-11-04 16:32:31.585816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:04.817 [2024-11-04 16:32:31.585819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef1580) on tqpair=0x1e8f690 00:21:04.817 [2024-11-04 16:32:31.585826] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:04.817 0% 00:21:04.817 Data Units Read: 0 00:21:04.817 Data Units Written: 0 00:21:04.817 Host Read Commands: 0 00:21:04.817 Host Write Commands: 0 00:21:04.817 Controller Busy Time: 0 minutes 00:21:04.817 Power Cycles: 0 00:21:04.817 Power On Hours: 0 hours 00:21:04.817 Unsafe Shutdowns: 0 00:21:04.817 Unrecoverable Media Errors: 0 00:21:04.817 Lifetime Error Log Entries: 0 00:21:04.817 Warning Temperature Time: 0 minutes 00:21:04.817 Critical Temperature Time: 0 minutes 00:21:04.817 00:21:04.817 Number of Queues 00:21:04.817 ================ 00:21:04.817 Number of I/O Submission Queues: 127 00:21:04.817 Number of I/O Completion Queues: 127 00:21:04.817 00:21:04.817 Active Namespaces 00:21:04.817 ================= 00:21:04.817 Namespace ID:1 00:21:04.817 Error Recovery Timeout: Unlimited 00:21:04.817 Command Set Identifier: NVM (00h) 00:21:04.817 Deallocate: Supported 00:21:04.817 Deallocated/Unwritten Error: Not Supported 00:21:04.817 Deallocated Read Value: Unknown 00:21:04.817 Deallocate in Write Zeroes: Not Supported 00:21:04.817 Deallocated Guard Field: 0xFFFF 00:21:04.817 Flush: Supported 00:21:04.817 Reservation: Supported 00:21:04.817 Namespace Sharing Capabilities: Multiple Controllers 00:21:04.817 Size (in LBAs): 131072 (0GiB) 00:21:04.817 Capacity (in LBAs): 131072 (0GiB) 00:21:04.817 Utilization (in LBAs): 131072 (0GiB) 00:21:04.817 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:04.817 EUI64: ABCDEF0123456789 00:21:04.817 UUID: 
9141657a-9a8a-478a-9dbf-94252550657d 00:21:04.817 Thin Provisioning: Not Supported 00:21:04.817 Per-NS Atomic Units: Yes 00:21:04.817 Atomic Boundary Size (Normal): 0 00:21:04.817 Atomic Boundary Size (PFail): 0 00:21:04.817 Atomic Boundary Offset: 0 00:21:04.817 Maximum Single Source Range Length: 65535 00:21:04.817 Maximum Copy Length: 65535 00:21:04.817 Maximum Source Range Count: 1 00:21:04.817 NGUID/EUI64 Never Reused: No 00:21:04.817 Namespace Write Protected: No 00:21:04.818 Number of LBA Formats: 1 00:21:04.818 Current LBA Format: LBA Format #00 00:21:04.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:04.818 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.818 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.818 rmmod nvme_tcp 00:21:05.076 
rmmod nvme_fabrics 00:21:05.076 rmmod nvme_keyring 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2892675 ']' 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2892675 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2892675 ']' 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2892675 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892675 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892675' 00:21:05.076 killing process with pid 2892675 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2892675 00:21:05.076 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2892675 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.334 16:32:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.237 16:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.237 00:21:07.237 real 0m8.611s 00:21:07.237 user 0m5.695s 00:21:07.237 sys 0m4.191s 00:21:07.237 16:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.237 16:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 ************************************ 00:21:07.237 END TEST nvmf_identify 00:21:07.237 ************************************ 00:21:07.237 16:32:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:07.237 16:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.237 16:32:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.237 16:32:34 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.237 ************************************ 00:21:07.237 START TEST nvmf_perf 00:21:07.237 ************************************ 00:21:07.237 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:07.497 * Looking for test storage... 00:21:07.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.497 16:32:34 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.497 --rc genhtml_branch_coverage=1 
00:21:07.497 --rc genhtml_function_coverage=1 00:21:07.497 --rc genhtml_legend=1 00:21:07.497 --rc geninfo_all_blocks=1 00:21:07.497 --rc geninfo_unexecuted_blocks=1 00:21:07.497 00:21:07.497 ' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.497 --rc genhtml_branch_coverage=1 00:21:07.497 --rc genhtml_function_coverage=1 00:21:07.497 --rc genhtml_legend=1 00:21:07.497 --rc geninfo_all_blocks=1 00:21:07.497 --rc geninfo_unexecuted_blocks=1 00:21:07.497 00:21:07.497 ' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.497 --rc genhtml_branch_coverage=1 00:21:07.497 --rc genhtml_function_coverage=1 00:21:07.497 --rc genhtml_legend=1 00:21:07.497 --rc geninfo_all_blocks=1 00:21:07.497 --rc geninfo_unexecuted_blocks=1 00:21:07.497 00:21:07.497 ' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.497 --rc genhtml_branch_coverage=1 00:21:07.497 --rc genhtml_function_coverage=1 00:21:07.497 --rc genhtml_legend=1 00:21:07.497 --rc geninfo_all_blocks=1 00:21:07.497 --rc geninfo_unexecuted_blocks=1 00:21:07.497 00:21:07.497 ' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.497 16:32:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.497 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.497 16:32:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:07.498 16:32:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.062 
16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.062 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.062 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.063 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:21:14.063 00:21:14.063 --- 10.0.0.2 ping statistics --- 00:21:14.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.063 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:14.063 00:21:14.063 --- 10.0.0.1 ping statistics --- 00:21:14.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.063 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.063 16:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2896341 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2896341 00:21:14.063 
16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2896341 ']' 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.063 [2024-11-04 16:32:40.040967] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:21:14.063 [2024-11-04 16:32:40.041021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.063 [2024-11-04 16:32:40.103648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.063 [2024-11-04 16:32:40.146954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.063 [2024-11-04 16:32:40.146990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.063 [2024-11-04 16:32:40.146997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.063 [2024-11-04 16:32:40.147002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.063 [2024-11-04 16:32:40.147007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.063 [2024-11-04 16:32:40.148568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.063 [2024-11-04 16:32:40.148656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.063 [2024-11-04 16:32:40.148708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.063 [2024-11-04 16:32:40.148709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:14.063 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.064 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.064 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.064 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.064 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:14.064 16:32:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:16.596 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:16.596 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:16.854 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:16.854 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:17.112 16:32:43 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:17.112 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:17.112 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:17.113 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:17.113 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.371 [2024-11-04 16:32:43.942067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.371 16:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.371 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:17.371 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.629 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:17.629 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:17.888 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.145 [2024-11-04 16:32:44.738460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.145 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:21:18.403 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:18.403 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:18.403 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:18.403 16:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:19.778 Initializing NVMe Controllers 00:21:19.778 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:19.778 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:19.778 Initialization complete. Launching workers. 00:21:19.778 ======================================================== 00:21:19.778 Latency(us) 00:21:19.778 Device Information : IOPS MiB/s Average min max 00:21:19.778 PCIE (0000:5e:00.0) NSID 1 from core 0: 100262.57 391.65 318.88 34.54 5202.64 00:21:19.778 ======================================================== 00:21:19.778 Total : 100262.57 391.65 318.88 34.54 5202.64 00:21:19.778 00:21:19.778 16:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:21.152 Initializing NVMe Controllers 00:21:21.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:21.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:21.152 Initialization complete. Launching workers. 
00:21:21.152 ======================================================== 00:21:21.152 Latency(us) 00:21:21.152 Device Information : IOPS MiB/s Average min max 00:21:21.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 267.00 1.04 3831.02 117.04 45811.25 00:21:21.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19681.31 7197.60 47905.58 00:21:21.152 ======================================================== 00:21:21.152 Total : 318.00 1.24 6373.05 117.04 47905.58 00:21:21.152 00:21:21.152 16:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:22.087 Initializing NVMe Controllers 00:21:22.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:22.087 Initialization complete. Launching workers. 
00:21:22.087 ======================================================== 00:21:22.087 Latency(us) 00:21:22.087 Device Information : IOPS MiB/s Average min max 00:21:22.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11146.99 43.54 2871.22 401.92 6752.28 00:21:22.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3793.00 14.82 8478.42 6301.09 15995.10 00:21:22.087 ======================================================== 00:21:22.087 Total : 14939.99 58.36 4294.79 401.92 15995.10 00:21:22.087 00:21:22.087 16:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:22.087 16:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:22.087 16:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:25.384 Initializing NVMe Controllers 00:21:25.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.384 Controller IO queue size 128, less than required. 00:21:25.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.384 Controller IO queue size 128, less than required. 00:21:25.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:25.384 Initialization complete. Launching workers. 
00:21:25.384 ======================================================== 00:21:25.384 Latency(us) 00:21:25.384 Device Information : IOPS MiB/s Average min max 00:21:25.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1439.50 359.88 90592.28 58605.97 145624.61 00:21:25.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.50 151.38 217933.27 71106.45 325463.99 00:21:25.384 ======================================================== 00:21:25.384 Total : 2045.00 511.25 128296.42 58605.97 325463.99 00:21:25.384 00:21:25.384 16:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:25.384 No valid NVMe controllers or AIO or URING devices found 00:21:25.384 Initializing NVMe Controllers 00:21:25.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.384 Controller IO queue size 128, less than required. 00:21:25.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.384 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:25.384 Controller IO queue size 128, less than required. 00:21:25.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.384 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:25.384 WARNING: Some requested NVMe devices were skipped 00:21:25.384 16:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:27.918 Initializing NVMe Controllers 00:21:27.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.918 Controller IO queue size 128, less than required. 00:21:27.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:27.918 Controller IO queue size 128, less than required. 00:21:27.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:27.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:27.918 Initialization complete. Launching workers. 
00:21:27.918 00:21:27.918 ==================== 00:21:27.918 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:27.918 TCP transport: 00:21:27.918 polls: 11752 00:21:27.918 idle_polls: 7922 00:21:27.918 sock_completions: 3830 00:21:27.918 nvme_completions: 6429 00:21:27.918 submitted_requests: 9542 00:21:27.918 queued_requests: 1 00:21:27.918 00:21:27.918 ==================== 00:21:27.918 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:27.918 TCP transport: 00:21:27.918 polls: 11428 00:21:27.918 idle_polls: 7430 00:21:27.918 sock_completions: 3998 00:21:27.918 nvme_completions: 6247 00:21:27.918 submitted_requests: 9424 00:21:27.918 queued_requests: 1 00:21:27.918 ======================================================== 00:21:27.918 Latency(us) 00:21:27.918 Device Information : IOPS MiB/s Average min max 00:21:27.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1605.97 401.49 81831.54 56197.22 151587.71 00:21:27.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1560.50 390.12 82264.18 41511.22 127550.34 00:21:27.918 ======================================================== 00:21:27.918 Total : 3166.46 791.62 82044.75 41511.22 151587.71 00:21:27.918 00:21:27.918 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:27.918 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.177 rmmod nvme_tcp 00:21:28.177 rmmod nvme_fabrics 00:21:28.177 rmmod nvme_keyring 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2896341 ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2896341 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2896341 ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2896341 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896341 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896341' 00:21:28.177 killing process with pid 2896341 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2896341 00:21:28.177 16:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2896341 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.079 16:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:32.614 00:21:32.614 real 0m24.893s 00:21:32.614 user 1m6.622s 00:21:32.614 sys 0m8.032s 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:32.614 ************************************ 00:21:32.614 END TEST nvmf_perf 00:21:32.614 ************************************ 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.614 16:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.614 ************************************ 00:21:32.614 START TEST nvmf_fio_host 00:21:32.614 ************************************ 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:32.614 * Looking for test storage... 00:21:32.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:32.614 16:32:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:32.614 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:32.615 16:32:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:32.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.615 --rc genhtml_branch_coverage=1 00:21:32.615 --rc genhtml_function_coverage=1 00:21:32.615 --rc genhtml_legend=1 00:21:32.615 --rc geninfo_all_blocks=1 00:21:32.615 --rc geninfo_unexecuted_blocks=1 00:21:32.615 00:21:32.615 ' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:32.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.615 --rc genhtml_branch_coverage=1 00:21:32.615 --rc genhtml_function_coverage=1 00:21:32.615 --rc genhtml_legend=1 00:21:32.615 --rc geninfo_all_blocks=1 00:21:32.615 --rc geninfo_unexecuted_blocks=1 00:21:32.615 00:21:32.615 ' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:32.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.615 --rc genhtml_branch_coverage=1 00:21:32.615 --rc genhtml_function_coverage=1 00:21:32.615 --rc genhtml_legend=1 00:21:32.615 --rc geninfo_all_blocks=1 00:21:32.615 --rc geninfo_unexecuted_blocks=1 00:21:32.615 00:21:32.615 ' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:32.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.615 --rc genhtml_branch_coverage=1 00:21:32.615 --rc genhtml_function_coverage=1 00:21:32.615 --rc genhtml_legend=1 00:21:32.615 --rc geninfo_all_blocks=1 00:21:32.615 --rc geninfo_unexecuted_blocks=1 00:21:32.615 00:21:32.615 ' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:32.615 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:32.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:32.616 16:32:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.616 16:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:21:37.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:37.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.994 16:33:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:37.994 Found net devices under 0000:86:00.0: cvl_0_0 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:37.994 Found net devices under 0000:86:00.1: cvl_0_1 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.994 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.995 16:33:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:21:37.995 00:21:37.995 --- 10.0.0.2 ping statistics --- 00:21:37.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.995 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:37.995 00:21:37.995 --- 10.0.0.1 ping statistics --- 00:21:37.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.995 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2902724 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2902724 00:21:37.995 
16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2902724 ']' 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.995 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.995 [2024-11-04 16:33:04.654693] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:21:37.995 [2024-11-04 16:33:04.654742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.995 [2024-11-04 16:33:04.726802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.995 [2024-11-04 16:33:04.770661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.995 [2024-11-04 16:33:04.770696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:37.995 [2024-11-04 16:33:04.770704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.995 [2024-11-04 16:33:04.770711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.995 [2024-11-04 16:33:04.770716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.995 [2024-11-04 16:33:04.772153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.995 [2024-11-04 16:33:04.772250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.995 [2024-11-04 16:33:04.772338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.995 [2024-11-04 16:33:04.772339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.284 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.284 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:38.284 16:33:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:38.284 [2024-11-04 16:33:05.040713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.284 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:38.284 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.284 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.284 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:38.543 Malloc1 00:21:38.543 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.801 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.059 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.319 [2024-11-04 16:33:05.907908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.319 16:33:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:39.598 16:33:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:39.598 16:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.868 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:39.868 fio-3.35 00:21:39.868 Starting 1 thread 00:21:42.393 00:21:42.393 test: (groupid=0, jobs=1): err= 0: pid=2903173: Mon Nov 4 16:33:08 2024 00:21:42.393 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec) 00:21:42.393 slat (nsec): min=1534, max=265224, avg=1912.70, stdev=2460.75 00:21:42.393 clat (usec): min=3207, max=10183, avg=5954.35, stdev=447.09 00:21:42.393 lat (usec): min=3237, max=10185, avg=5956.26, stdev=446.94 00:21:42.393 clat percentiles (usec): 00:21:42.393 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:21:42.393 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:21:42.393 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:21:42.393 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8848], 99.95th=[ 9372], 00:21:42.393 | 99.99th=[10159] 00:21:42.393 bw ( KiB/s): min=46400, max=47840, per=99.95%, avg=47346.00, stdev=662.81, samples=4 00:21:42.393 iops : min=11600, max=11960, avg=11836.50, stdev=165.70, samples=4 00:21:42.393 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(92.3MiB/2005msec); 0 zone resets 00:21:42.393 slat (nsec): min=1580, max=263932, avg=1949.58, stdev=1875.55 00:21:42.393 clat (usec): min=2589, max=9142, avg=4819.96, stdev=365.45 00:21:42.393 lat (usec): min=2604, max=9143, avg=4821.91, stdev=365.37 00:21:42.393 clat percentiles (usec): 00:21:42.393 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:21:42.393 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 
00:21:42.393 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407], 00:21:42.393 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 7242], 99.95th=[ 8356], 00:21:42.393 | 99.99th=[ 9110] 00:21:42.393 bw ( KiB/s): min=46736, max=47728, per=100.00%, avg=47156.00, stdev=440.78, samples=4 00:21:42.393 iops : min=11684, max=11932, avg=11789.00, stdev=110.19, samples=4 00:21:42.393 lat (msec) : 4=0.65%, 10=99.34%, 20=0.01% 00:21:42.393 cpu : usr=73.80%, sys=24.50%, ctx=149, majf=0, minf=3 00:21:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:42.393 issued rwts: total=23744,23634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:42.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:42.393 00:21:42.393 Run status group 0 (all jobs): 00:21:42.393 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec 00:21:42.393 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:42.393 16:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:42.393 16:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.651 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:42.651 fio-3.35 00:21:42.651 Starting 1 thread 00:21:45.182 00:21:45.182 test: (groupid=0, jobs=1): err= 0: pid=2904058: Mon Nov 4 16:33:11 2024 00:21:45.182 read: IOPS=11.0k, BW=171MiB/s (179MB/s)(343MiB/2006msec) 00:21:45.182 slat (nsec): min=2507, max=92400, avg=2820.63, stdev=1230.08 00:21:45.182 clat (usec): min=1482, max=12940, avg=6739.64, stdev=1568.30 00:21:45.182 lat (usec): min=1485, max=12943, avg=6742.46, stdev=1568.39 00:21:45.182 clat percentiles (usec): 00:21:45.182 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5276], 00:21:45.182 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:21:45.182 | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9241], 00:21:45.182 | 99.00th=[10683], 99.50th=[11076], 99.90th=[12518], 99.95th=[12649], 00:21:45.182 | 99.99th=[12911] 00:21:45.182 bw ( KiB/s): min=87104, max=95552, per=51.40%, avg=90064.00, stdev=3754.11, samples=4 00:21:45.182 iops : min= 5444, max= 5972, avg=5629.00, stdev=234.63, samples=4 00:21:45.182 write: IOPS=6431, BW=100MiB/s (105MB/s)(184MiB/1826msec); 0 zone resets 00:21:45.182 slat (usec): min=29, max=239, avg=31.58, stdev= 5.61 00:21:45.182 clat (usec): min=3746, max=15140, avg=8605.25, stdev=1531.39 00:21:45.182 lat (usec): min=3775, max=15170, avg=8636.84, stdev=1531.94 00:21:45.182 clat percentiles (usec): 00:21:45.182 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6849], 
20.00th=[ 7373], 00:21:45.182 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:21:45.182 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[11469], 00:21:45.182 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13698], 99.95th=[14091], 00:21:45.182 | 99.99th=[14353] 00:21:45.182 bw ( KiB/s): min=90240, max=99328, per=90.98%, avg=93624.00, stdev=3962.70, samples=4 00:21:45.182 iops : min= 5640, max= 6208, avg=5851.50, stdev=247.67, samples=4 00:21:45.182 lat (msec) : 2=0.05%, 4=1.54%, 10=90.73%, 20=7.69% 00:21:45.182 cpu : usr=86.18%, sys=13.12%, ctx=46, majf=0, minf=3 00:21:45.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:45.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:45.182 issued rwts: total=21969,11744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:45.182 00:21:45.182 Run status group 0 (all jobs): 00:21:45.182 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=343MiB (360MB), run=2006-2006msec 00:21:45.182 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=184MiB (192MB), run=1826-1826msec 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:21:45.182 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:45.183 rmmod nvme_tcp 00:21:45.183 rmmod nvme_fabrics 00:21:45.183 rmmod nvme_keyring 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2902724 ']' 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2902724 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2902724 ']' 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2902724 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:45.183 16:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.183 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902724 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902724' 
00:21:45.442 killing process with pid 2902724 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2902724 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2902724 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.442 16:33:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.975 00:21:47.975 real 0m15.286s 00:21:47.975 user 0m46.516s 00:21:47.975 sys 0m6.083s 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.975 ************************************ 
00:21:47.975 END TEST nvmf_fio_host 00:21:47.975 ************************************ 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.975 ************************************ 00:21:47.975 START TEST nvmf_failover 00:21:47.975 ************************************ 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:47.975 * Looking for test storage... 00:21:47.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.975 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.976 16:33:14 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.976 --rc genhtml_branch_coverage=1 00:21:47.976 --rc genhtml_function_coverage=1 00:21:47.976 --rc genhtml_legend=1 00:21:47.976 --rc geninfo_all_blocks=1 00:21:47.976 --rc geninfo_unexecuted_blocks=1 00:21:47.976 00:21:47.976 ' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.976 --rc genhtml_branch_coverage=1 00:21:47.976 --rc genhtml_function_coverage=1 00:21:47.976 --rc genhtml_legend=1 00:21:47.976 --rc geninfo_all_blocks=1 00:21:47.976 --rc geninfo_unexecuted_blocks=1 00:21:47.976 00:21:47.976 ' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.976 --rc genhtml_branch_coverage=1 00:21:47.976 --rc genhtml_function_coverage=1 00:21:47.976 --rc genhtml_legend=1 00:21:47.976 --rc geninfo_all_blocks=1 00:21:47.976 --rc geninfo_unexecuted_blocks=1 00:21:47.976 00:21:47.976 ' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.976 --rc genhtml_branch_coverage=1 00:21:47.976 --rc genhtml_function_coverage=1 00:21:47.976 --rc genhtml_legend=1 00:21:47.976 --rc 
geninfo_all_blocks=1 00:21:47.976 --rc geninfo_unexecuted_blocks=1 00:21:47.976 00:21:47.976 ' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.976 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.977 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.977 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:47.977 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:47.977 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.977 16:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.250 16:33:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.250 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.251 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.251 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.251 16:33:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.251 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.251 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.251 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:21:53.510 00:21:53.510 --- 10.0.0.2 ping statistics --- 00:21:53.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.510 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:21:53.510 00:21:53.510 --- 10.0.0.1 ping statistics --- 00:21:53.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.510 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2907980 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2907980 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2907980 ']' 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.510 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.511 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.511 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.511 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.511 [2024-11-04 16:33:20.187272] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:21:53.511 [2024-11-04 16:33:20.187317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.511 [2024-11-04 16:33:20.255457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.511 [2024-11-04 16:33:20.297627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.511 [2024-11-04 16:33:20.297662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.511 [2024-11-04 16:33:20.297672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.511 [2024-11-04 16:33:20.297680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:53.511 [2024-11-04 16:33:20.297687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.511 [2024-11-04 16:33:20.299063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.511 [2024-11-04 16:33:20.299149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.511 [2024-11-04 16:33:20.299152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.770 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:54.028 [2024-11-04 16:33:20.595123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.029 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:54.029 Malloc0 00:21:54.029 16:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.287 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.546 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.804 [2024-11-04 16:33:21.388172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.805 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.805 [2024-11-04 16:33:21.592805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.805 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.064 [2024-11-04 16:33:21.781418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2908356 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2908356 /var/tmp/bdevperf.sock 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2908356 ']' 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.064 16:33:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:55.323 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.323 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:55.323 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:55.582 NVMe0n1 00:21:55.582 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:55.840 00:21:55.840 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2908371 00:21:55.840 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.840 16:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:21:56.776 16:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.034 [2024-11-04 16:33:23.775892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.775999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same 
with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 [2024-11-04 16:33:23.776034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7472d0 is same with the state(6) to be set 00:21:57.035 16:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:00.321 16:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:00.321 00:22:00.321 16:33:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.580 [2024-11-04 16:33:27.354079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354135] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 [2024-11-04 16:33:27.354165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7480d0 is same with the state(6) to be set 00:22:00.580 16:33:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:03.866 16:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.866 [2024-11-04 16:33:30.567071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.866 16:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:04.802 16:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:05.060 16:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2908371 00:22:11.635 { 00:22:11.635 "results": [ 00:22:11.635 { 00:22:11.635 "job": "NVMe0n1", 00:22:11.635 "core_mask": "0x1", 00:22:11.635 "workload": "verify", 00:22:11.635 "status": "finished", 00:22:11.635 "verify_range": { 00:22:11.635 "start": 0, 00:22:11.635 "length": 16384 00:22:11.635 }, 00:22:11.635 "queue_depth": 128, 00:22:11.635 "io_size": 
4096, 00:22:11.635 "runtime": 15.006958, 00:22:11.635 "iops": 10870.357603453012, 00:22:11.635 "mibps": 42.46233438848833, 00:22:11.635 "io_failed": 23813, 00:22:11.635 "io_timeout": 0, 00:22:11.635 "avg_latency_us": 10254.168501980732, 00:22:11.635 "min_latency_us": 413.50095238095236, 00:22:11.635 "max_latency_us": 13856.182857142858 00:22:11.635 } 00:22:11.635 ], 00:22:11.635 "core_count": 1 00:22:11.635 } 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2908356 ']' 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908356' 00:22:11.635 killing process with pid 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2908356 00:22:11.635 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:11.635 [2024-11-04 16:33:21.839940] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:22:11.635 [2024-11-04 16:33:21.839996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908356 ] 00:22:11.635 [2024-11-04 16:33:21.905260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.635 [2024-11-04 16:33:21.947346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.635 Running I/O for 15 seconds... 00:22:11.635 11037.00 IOPS, 43.11 MiB/s [2024-11-04T15:33:38.459Z] [2024-11-04 16:33:23.777762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:11.635 [2024-11-04 16:33:23.777863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.777986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.777993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 
16:33:23.778118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.635 [2024-11-04 16:33:23.778125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.635 [2024-11-04 16:33:23.778132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 
16:33:23.778536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.636 [2024-11-04 16:33:23.778695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.636 [2024-11-04 16:33:23.778701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:11.636 [2024-11-04 16:33:23.778709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.637 [2024-11-04 16:33:23.778716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.637 [2024-11-04 16:33:23.778731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.637 [2024-11-04 16:33:23.778745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.637 [2024-11-04 16:33:23.778759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.637 [2024-11-04 16:33:23.778773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.637 [2024-11-04 16:33:23.778800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:22:11.637 [2024-11-04 16:33:23.778807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.637 [2024-11-04 16:33:23.778854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.637 [2024-11-04 16:33:23.778868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.637 [2024-11-04 16:33:23.778882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.637 [2024-11-04 16:33:23.778895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.778901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb91340 is same with the state(6) to be set 00:22:11.637 [2024-11-04 16:33:23.779019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.637 [2024-11-04 16:33:23.779025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:11.637 [2024-11-04 16:33:23.779031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97616 len:8 PRP1 0x0 PRP2 0x0 00:22:11.637 [2024-11-04 16:33:23.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.779046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.637 [2024-11-04 16:33:23.779051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.637 [2024-11-04 16:33:23.779059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 00:22:11.637 [2024-11-04 16:33:23.779066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.779073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.637 [2024-11-04 16:33:23.779078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.637 [2024-11-04 16:33:23.779083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98320 len:8 PRP1 0x0 PRP2 0x0 00:22:11.637 [2024-11-04 16:33:23.779089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637 [2024-11-04 16:33:23.779096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.637 [2024-11-04 16:33:23.779100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.637 [2024-11-04 16:33:23.779106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98328 len:8 PRP1 0x0 PRP2 0x0 00:22:11.637 [2024-11-04 16:33:23.779112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.637
[... repeated qpair-abort cycle elided: for each queued command, nvme_qpair.c:579 nvme_qpair_abort_queued_reqs logs *ERROR*: aborting queued i/o; nvme_qpair.c:558 nvme_qpair_manual_complete_request logs *NOTICE*: Command completed manually:; nvme_qpair.c:243 nvme_io_qpair_print_command prints the command (WRITE sqid:1 cid:0 nsid:1 lba:98336-98592 and lba:97824-97928, READ sqid:1 cid:0 nsid:1 lba:97632-97816, all len:8 PRP1 0x0 PRP2 0x0); and nvme_qpair.c:474 spdk_nvme_print_completion reports *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. Timestamps span 2024-11-04 16:33:23.779119 through 16:33:23.785494 ...]
[2024-11-04 16:33:23.785500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97936 len:8 PRP1 0x0 PRP2 0x0 00:22:11.640 [2024-11-04 16:33:23.785506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.640 [2024-11-04 16:33:23.785512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.640 [2024-11-04 16:33:23.785517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.640 [2024-11-04 16:33:23.785522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97944 len:8 PRP1 0x0 PRP2 0x0 00:22:11.640 [2024-11-04 16:33:23.785528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.640 [2024-11-04 16:33:23.785534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.640 [2024-11-04 16:33:23.785539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.640 [2024-11-04 16:33:23.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97952 len:8 PRP1 0x0 PRP2 0x0 00:22:11.640 [2024-11-04 16:33:23.785551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.640 [2024-11-04 16:33:23.785557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97960 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97968 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97976 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97984 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 
[2024-11-04 16:33:23.785813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785967] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.785983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.785989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.785994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.785999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.786005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.786011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.786016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.641 [2024-11-04 16:33:23.786021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:22:11.641 [2024-11-04 16:33:23.786027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.641 [2024-11-04 16:33:23.786033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.641 [2024-11-04 16:33:23.786038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 
16:33:23.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786197] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 
[2024-11-04 16:33:23.786277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98240 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98288 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98296 len:8 PRP1 0x0 PRP2 0x0 00:22:11.642 [2024-11-04 16:33:23.786524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.642 [2024-11-04 16:33:23.786530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.642 [2024-11-04 16:33:23.786535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.642 [2024-11-04 16:33:23.786540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98312 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.643 [2024-11-04 16:33:23.786678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.643 [2024-11-04 16:33:23.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:22:11.643 [2024-11-04 16:33:23.786689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:23.786738] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:11.643 [2024-11-04 16:33:23.786747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:11.643 [2024-11-04 16:33:23.789708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:11.643 [2024-11-04 16:33:23.789734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb91340 (9): Bad file descriptor 00:22:11.643 [2024-11-04 16:33:23.941586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:11.643 10254.00 IOPS, 40.05 MiB/s [2024-11-04T15:33:38.467Z] 10670.33 IOPS, 41.68 MiB/s [2024-11-04T15:33:38.467Z] 10852.25 IOPS, 42.39 MiB/s [2024-11-04T15:33:38.467Z] [2024-11-04 16:33:27.355707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.643 [2024-11-04 16:33:27.355860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.355978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.355986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 
[2024-11-04 16:33:27.355993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.643 [2024-11-04 16:33:27.356118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.643 [2024-11-04 16:33:27.356124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 
16:33:27.356405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.644 [2024-11-04 16:33:27.356553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:11.644 [2024-11-04 16:33:27.356568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.644 [2024-11-04 16:33:27.356582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.644 [2024-11-04 16:33:27.356597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.644 [2024-11-04 16:33:27.356611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.644 [2024-11-04 16:33:27.356617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.645 [2024-11-04 16:33:27.356631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.645 [2024-11-04 16:33:27.356646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:11.645 [2024-11-04 16:33:27.356818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.356989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.356997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.357003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.357017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.357031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 [2024-11-04 16:33:27.357044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.645 
[2024-11-04 16:33:27.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.645 [2024-11-04 16:33:27.357087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75512 len:8 PRP1 0x0 PRP2 0x0 00:22:11.645 [2024-11-04 16:33:27.357093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.645 [2024-11-04 16:33:27.357107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.645 [2024-11-04 16:33:27.357113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75520 len:8 PRP1 0x0 PRP2 0x0 00:22:11.645 [2024-11-04 16:33:27.357121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.645 [2024-11-04 16:33:27.357132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.645 [2024-11-04 16:33:27.357137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75528 len:8 PRP1 0x0 PRP2 0x0 00:22:11.645 [2024-11-04 16:33:27.357143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.645 [2024-11-04 16:33:27.357154] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.645 [2024-11-04 16:33:27.357160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75536 len:8 PRP1 0x0 PRP2 0x0 00:22:11.645 [2024-11-04 16:33:27.357166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.645 [2024-11-04 16:33:27.357177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.645 [2024-11-04 16:33:27.357187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75544 len:8 PRP1 0x0 PRP2 0x0 00:22:11.645 [2024-11-04 16:33:27.357193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.645 [2024-11-04 16:33:27.357200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75552 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75560 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 
[2024-11-04 16:33:27.357238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75568 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75576 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75584 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74880 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74888 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75592 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75600 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75608 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75616 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75624 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:11.646 [2024-11-04 16:33:27.357473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75632 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75640 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75648 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:22:11.646 [2024-11-04 16:33:27.357553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75656 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75664 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75672 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75680 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75688 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75696 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75704 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 
[2024-11-04 16:33:27.357713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75712 len:8 PRP1 0x0 PRP2 0x0 00:22:11.646 [2024-11-04 16:33:27.357725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.646 [2024-11-04 16:33:27.357732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.646 [2024-11-04 16:33:27.357737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.646 [2024-11-04 16:33:27.357743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75720 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75728 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:75736 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75744 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75752 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75760 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357869] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75768 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75776 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.647 [2024-11-04 16:33:27.357920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.647 [2024-11-04 16:33:27.357926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75784 len:8 PRP1 0x0 PRP2 0x0 00:22:11.647 [2024-11-04 16:33:27.357932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.357974] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:11.647 [2024-11-04 16:33:27.357996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.647 [2024-11-04 16:33:27.358004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.358011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.647 [2024-11-04 16:33:27.358017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.358024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.647 [2024-11-04 16:33:27.358030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.358037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.647 [2024-11-04 16:33:27.358043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:27.358050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:11.647 [2024-11-04 16:33:27.358081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb91340 (9): Bad file descriptor 00:22:11.647 [2024-11-04 16:33:27.361053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:11.647 [2024-11-04 16:33:27.504335] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:22:11.647 10605.80 IOPS, 41.43 MiB/s [2024-11-04T15:33:38.471Z] 10711.83 IOPS, 41.84 MiB/s [2024-11-04T15:33:38.471Z] 10766.00 IOPS, 42.05 MiB/s [2024-11-04T15:33:38.471Z] 10805.12 IOPS, 42.21 MiB/s [2024-11-04T15:33:38.471Z] 10873.00 IOPS, 42.47 MiB/s [2024-11-04T15:33:38.471Z] [2024-11-04 16:33:31.793811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.647 [2024-11-04 16:33:31.793939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.793954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.793968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.793982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.793990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 
16:33:31.794011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.794036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.794051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.794066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.794081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.647 [2024-11-04 16:33:31.794095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.647 [2024-11-04 16:33:31.794103] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.648 [2024-11-04 16:33:31.794182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 
16:33:31.794269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 
16:33:31.794515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794609] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.648 [2024-11-04 16:33:31.794657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.648 [2024-11-04 16:33:31.794666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 
16:33:31.794773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.794989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.794997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 
16:33:31.795018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795098] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.649 [2024-11-04 16:33:31.795156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.649 [2024-11-04 16:33:31.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 
16:33:31.795263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795342] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 
16:33:31.795506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.650 [2024-11-04 16:33:31.795591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:11.650 [2024-11-04 16:33:31.795609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:11.650 [2024-11-04 16:33:31.795674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.650 [2024-11-04 16:33:31.795682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.651 [2024-11-04 16:33:31.795696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.651 [2024-11-04 16:33:31.795711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecd40 is same with the state(6) to be set 00:22:11.651 [2024-11-04 16:33:31.795727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.651 [2024-11-04 16:33:31.795733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.651 [2024-11-04 16:33:31.795740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122784 len:8 PRP1 0x0 PRP2 0x0 00:22:11.651 [2024-11-04 16:33:31.795751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795794] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 
10.0.0.2:4420 00:22:11.651 [2024-11-04 16:33:31.795818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.651 [2024-11-04 16:33:31.795826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.651 [2024-11-04 16:33:31.795840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.651 [2024-11-04 16:33:31.795853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.651 [2024-11-04 16:33:31.795866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.651 [2024-11-04 16:33:31.795873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:11.651 [2024-11-04 16:33:31.798675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:11.651 [2024-11-04 16:33:31.798704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb91340 (9): Bad file descriptor 00:22:11.651 [2024-11-04 16:33:31.984358] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:11.651 10690.80 IOPS, 41.76 MiB/s [2024-11-04T15:33:38.475Z] 10726.64 IOPS, 41.90 MiB/s [2024-11-04T15:33:38.475Z] 10766.92 IOPS, 42.06 MiB/s [2024-11-04T15:33:38.475Z] 10807.15 IOPS, 42.22 MiB/s [2024-11-04T15:33:38.475Z] 10848.86 IOPS, 42.38 MiB/s [2024-11-04T15:33:38.475Z] 10866.87 IOPS, 42.45 MiB/s 00:22:11.651 Latency(us) 00:22:11.651 [2024-11-04T15:33:38.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:11.651 Verification LBA range: start 0x0 length 0x4000 00:22:11.651 NVMe0n1 : 15.01 10870.36 42.46 1586.80 0.00 10254.17 413.50 13856.18 00:22:11.651 [2024-11-04T15:33:38.475Z] =================================================================================================================== 00:22:11.651 [2024-11-04T15:33:38.475Z] Total : 10870.36 42.46 1586.80 0.00 10254.17 413.50 13856.18 00:22:11.651 Received shutdown signal, test time was about 15.000000 seconds 00:22:11.651 00:22:11.651 Latency(us) 00:22:11.651 [2024-11-04T15:33:38.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.651 [2024-11-04T15:33:38.475Z] =================================================================================================================== 00:22:11.651 [2024-11-04T15:33:38.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2910893 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2910893 /var/tmp/bdevperf.sock 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2910893 ']' 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.651 16:33:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.651 16:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.651 16:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:11.651 16:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:11.651 [2024-11-04 16:33:38.378921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:11.651 16:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:11.910 [2024-11-04 16:33:38.575488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:11.910 16:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:12.168 NVMe0n1 00:22:12.426 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:12.684 00:22:12.684 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:12.943 00:22:12.943 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.943 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:13.201 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.201 16:33:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:16.486 16:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:16.486 16:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:16.486 16:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2911812 00:22:16.486 16:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:16.486 16:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2911812 00:22:17.863 { 00:22:17.863 "results": [ 00:22:17.863 { 00:22:17.863 "job": "NVMe0n1", 00:22:17.863 "core_mask": "0x1", 00:22:17.863 "workload": "verify", 00:22:17.863 "status": "finished", 00:22:17.863 "verify_range": { 00:22:17.863 "start": 0, 00:22:17.863 "length": 16384 00:22:17.863 }, 00:22:17.863 "queue_depth": 128, 00:22:17.863 "io_size": 4096, 00:22:17.863 "runtime": 1.009856, 00:22:17.863 "iops": 11302.60155903416, 00:22:17.863 "mibps": 44.150787339977185, 00:22:17.863 "io_failed": 0, 00:22:17.863 "io_timeout": 0, 00:22:17.863 "avg_latency_us": 
11275.054521181171, 00:22:17.863 "min_latency_us": 1458.9561904761904, 00:22:17.863 "max_latency_us": 9487.11619047619 00:22:17.863 } 00:22:17.863 ], 00:22:17.863 "core_count": 1 00:22:17.863 } 00:22:17.863 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:17.863 [2024-11-04 16:33:38.013161] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:22:17.863 [2024-11-04 16:33:38.013212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910893 ] 00:22:17.863 [2024-11-04 16:33:38.075962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.863 [2024-11-04 16:33:38.113261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.863 [2024-11-04 16:33:39.950733] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:17.863 [2024-11-04 16:33:39.950777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.863 [2024-11-04 16:33:39.950789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.863 [2024-11-04 16:33:39.950798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.863 [2024-11-04 16:33:39.950805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.863 [2024-11-04 16:33:39.950813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:17.863 [2024-11-04 16:33:39.950820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.863 [2024-11-04 16:33:39.950828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.863 [2024-11-04 16:33:39.950835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.863 [2024-11-04 16:33:39.950842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:17.863 [2024-11-04 16:33:39.950866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:17.863 [2024-11-04 16:33:39.950880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4340 (9): Bad file descriptor 00:22:17.863 [2024-11-04 16:33:39.961374] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:17.863 Running I/O for 1 seconds... 
00:22:17.863 11242.00 IOPS, 43.91 MiB/s 00:22:17.863 Latency(us) 00:22:17.863 [2024-11-04T15:33:44.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.863 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:17.863 Verification LBA range: start 0x0 length 0x4000 00:22:17.863 NVMe0n1 : 1.01 11302.60 44.15 0.00 0.00 11275.05 1458.96 9487.12 00:22:17.863 [2024-11-04T15:33:44.687Z] =================================================================================================================== 00:22:17.863 [2024-11-04T15:33:44.687Z] Total : 11302.60 44.15 0.00 0.00 11275.05 1458.96 9487.12 00:22:17.863 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:17.863 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:17.863 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.121 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.121 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:18.121 16:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.380 16:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:21.664 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2910893 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2910893 ']' 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2910893 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910893 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910893' 00:22:21.665 killing process with pid 2910893 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2910893 00:22:21.665 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2910893 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.923 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.923 rmmod nvme_tcp 00:22:21.923 rmmod nvme_fabrics 00:22:21.923 rmmod nvme_keyring 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2907980 ']' 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2907980 ']' 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907980' 00:22:22.182 killing process with pid 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2907980 00:22:22.182 16:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.182 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.441 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.441 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.441 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.441 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.441 16:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.344 00:22:24.344 real 0m36.711s 00:22:24.344 user 1m57.099s 00:22:24.344 sys 
0m7.585s 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:24.344 ************************************ 00:22:24.344 END TEST nvmf_failover 00:22:24.344 ************************************ 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.344 ************************************ 00:22:24.344 START TEST nvmf_host_discovery 00:22:24.344 ************************************ 00:22:24.344 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:24.603 * Looking for test storage... 
00:22:24.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:24.603 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.604 --rc genhtml_branch_coverage=1 00:22:24.604 --rc genhtml_function_coverage=1 00:22:24.604 --rc 
genhtml_legend=1 00:22:24.604 --rc geninfo_all_blocks=1 00:22:24.604 --rc geninfo_unexecuted_blocks=1 00:22:24.604 00:22:24.604 ' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.604 --rc genhtml_branch_coverage=1 00:22:24.604 --rc genhtml_function_coverage=1 00:22:24.604 --rc genhtml_legend=1 00:22:24.604 --rc geninfo_all_blocks=1 00:22:24.604 --rc geninfo_unexecuted_blocks=1 00:22:24.604 00:22:24.604 ' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.604 --rc genhtml_branch_coverage=1 00:22:24.604 --rc genhtml_function_coverage=1 00:22:24.604 --rc genhtml_legend=1 00:22:24.604 --rc geninfo_all_blocks=1 00:22:24.604 --rc geninfo_unexecuted_blocks=1 00:22:24.604 00:22:24.604 ' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.604 --rc genhtml_branch_coverage=1 00:22:24.604 --rc genhtml_function_coverage=1 00:22:24.604 --rc genhtml_legend=1 00:22:24.604 --rc geninfo_all_blocks=1 00:22:24.604 --rc geninfo_unexecuted_blocks=1 00:22:24.604 00:22:24.604 ' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.604 16:33:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.604 16:33:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.604 16:33:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.604 16:33:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.175 
16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.175 16:33:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.175 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.175 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.175 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.176 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.176 16:33:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:22:31.176 00:22:31.176 --- 10.0.0.2 ping statistics --- 00:22:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.176 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:22:31.176 00:22:31.176 --- 10.0.0.1 ping statistics --- 00:22:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.176 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.176 
16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2916218 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2916218 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2916218 ']' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 [2024-11-04 16:33:57.239794] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:22:31.176 [2024-11-04 16:33:57.239846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.176 [2024-11-04 16:33:57.308181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.176 [2024-11-04 16:33:57.349895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.176 [2024-11-04 16:33:57.349926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.176 [2024-11-04 16:33:57.349935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.176 [2024-11-04 16:33:57.349942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.176 [2024-11-04 16:33:57.349948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.176 [2024-11-04 16:33:57.350541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 [2024-11-04 16:33:57.484914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 [2024-11-04 16:33:57.493125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:31.176 16:33:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 null0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 null1 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2916281 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2916281 /tmp/host.sock 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2916281 ']' 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:31.176 
16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:31.176 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 [2024-11-04 16:33:57.568307] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:22:31.176 [2024-11-04 16:33:57.568346] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916281 ] 00:22:31.176 [2024-11-04 16:33:57.629552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.176 [2024-11-04 16:33:57.669804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:31.176 16:33:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.176 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:31.177 16:33:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.177 16:33:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:31.177 16:33:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.177 16:33:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 [2024-11-04 16:33:58.090661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.437 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.438 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.697 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:31.697 16:33:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:32.263 [2024-11-04 16:33:58.791305] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:32.263 [2024-11-04 16:33:58.791324] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:32.263 [2024-11-04 16:33:58.791336] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:32.263 [2024-11-04 16:33:58.877591] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:32.263 [2024-11-04 16:33:58.973357] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:32.263 [2024-11-04 16:33:58.974130] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xde4dd0:1 started. 00:22:32.263 [2024-11-04 16:33:58.975494] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:32.263 [2024-11-04 16:33:58.975509] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:32.263 [2024-11-04 16:33:58.980151] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde4dd0 was disconnected and freed. delete nvme_qpair. 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.521 16:33:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:32.780 
16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.780 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:33.039 [2024-11-04 16:33:59.705282] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xde51a0:1 started. 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:33.039 16:33:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.039 [2024-11-04 16:33:59.753227] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde51a0 was disconnected and freed. delete nvme_qpair. 
00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:33.039 16:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.975 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.234 [2024-11-04 16:34:00.830340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.234 [2024-11-04 16:34:00.830986] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:34.234 [2024-11-04 16:34:00.831009] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:34.234 16:34:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:34.234 16:34:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.234 [2024-11-04 16:34:00.958372] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:34.234 16:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:34.493 [2024-11-04 16:34:01.061281] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:34.493 [2024-11-04 16:34:01.061317] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:34.493 [2024-11-04 16:34:01.061325] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:22:34.493 [2024-11-04 16:34:01.061330] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.474 16:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.474 [2024-11-04 16:34:02.086054] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:35.474 [2024-11-04 16:34:02.086075] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:35.474 [2024-11-04 16:34:02.092306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.474 [2024-11-04 16:34:02.092324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.474 [2024-11-04 16:34:02.092337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.474 [2024-11-04 16:34:02.092344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.474 [2024-11-04 16:34:02.092351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.474 [2024-11-04 16:34:02.092358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.474 [2024-11-04 16:34:02.092365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.474 [2024-11-04 16:34:02.092372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.474 [2024-11-04 16:34:02.092378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:35.474 16:34:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.474 [2024-11-04 16:34:02.102318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.474 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.474 [2024-11-04 16:34:02.112356] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.474 [2024-11-04 16:34:02.112367] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:35.474 [2024-11-04 16:34:02.112372] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:35.474 [2024-11-04 16:34:02.112377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.474 [2024-11-04 16:34:02.112393] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:35.474 [2024-11-04 16:34:02.112577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.474 [2024-11-04 16:34:02.112591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.474 [2024-11-04 16:34:02.112599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.474 [2024-11-04 16:34:02.112616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.474 [2024-11-04 16:34:02.112626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.474 [2024-11-04 16:34:02.112633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.474 [2024-11-04 16:34:02.112642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.474 [2024-11-04 16:34:02.112648] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:35.474 [2024-11-04 16:34:02.112652] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:35.474 [2024-11-04 16:34:02.112660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:35.474 [2024-11-04 16:34:02.122424] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.474 [2024-11-04 16:34:02.122435] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:35.475 [2024-11-04 16:34:02.122439] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.122443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.475 [2024-11-04 16:34:02.122458] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.122635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.475 [2024-11-04 16:34:02.122648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.475 [2024-11-04 16:34:02.122656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.475 [2024-11-04 16:34:02.122666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.475 [2024-11-04 16:34:02.122676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.475 [2024-11-04 16:34:02.122682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.475 [2024-11-04 16:34:02.122690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.475 [2024-11-04 16:34:02.122695] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:35.475 [2024-11-04 16:34:02.122700] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:35.475 [2024-11-04 16:34:02.122703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:35.475 [2024-11-04 16:34:02.132489] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.475 [2024-11-04 16:34:02.132502] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:35.475 [2024-11-04 16:34:02.132506] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.132510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.475 [2024-11-04 16:34:02.132524] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.132825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.475 [2024-11-04 16:34:02.132839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.475 [2024-11-04 16:34:02.132847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.475 [2024-11-04 16:34:02.132857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.475 [2024-11-04 16:34:02.132873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.475 [2024-11-04 16:34:02.132881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.475 [2024-11-04 16:34:02.132887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.475 [2024-11-04 16:34:02.132893] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:35.475 [2024-11-04 16:34:02.132901] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:35.475 [2024-11-04 16:34:02.132905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.475 [2024-11-04 16:34:02.142555] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.475 [2024-11-04 16:34:02.142568] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:35.475 [2024-11-04 16:34:02.142573] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:22:35.475 [2024-11-04 16:34:02.142577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.475 [2024-11-04 16:34:02.142589] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.475 [2024-11-04 16:34:02.142825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.475 [2024-11-04 16:34:02.142839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.475 [2024-11-04 16:34:02.142847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.475 [2024-11-04 16:34:02.142857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.475 [2024-11-04 16:34:02.142867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.475 [2024-11-04 16:34:02.142873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.475 [2024-11-04 16:34:02.142879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.475 [2024-11-04 16:34:02.142885] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:35.475 [2024-11-04 16:34:02.142889] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:22:35.475 [2024-11-04 16:34:02.142893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.475 [2024-11-04 16:34:02.152620] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.475 [2024-11-04 16:34:02.152634] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:35.475 [2024-11-04 16:34:02.152643] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.152647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.475 [2024-11-04 16:34:02.152662] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:35.475 [2024-11-04 16:34:02.152838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.475 [2024-11-04 16:34:02.152850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.475 [2024-11-04 16:34:02.152858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.475 [2024-11-04 16:34:02.152868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.475 [2024-11-04 16:34:02.152877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.475 [2024-11-04 16:34:02.152883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.475 [2024-11-04 16:34:02.152890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.475 [2024-11-04 16:34:02.152896] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:35.475 [2024-11-04 16:34:02.152900] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:35.475 [2024-11-04 16:34:02.152904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:35.475 [2024-11-04 16:34:02.162693] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.475 [2024-11-04 16:34:02.162703] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:35.475 [2024-11-04 16:34:02.162707] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.162711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.475 [2024-11-04 16:34:02.162724] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:35.475 [2024-11-04 16:34:02.162975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.475 [2024-11-04 16:34:02.162986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb5390 with addr=10.0.0.2, port=4420 00:22:35.475 [2024-11-04 16:34:02.162993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5390 is same with the state(6) to be set 00:22:35.475 [2024-11-04 16:34:02.163003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb5390 (9): Bad file descriptor 00:22:35.475 [2024-11-04 16:34:02.163012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:35.475 [2024-11-04 16:34:02.163019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:35.475 [2024-11-04 16:34:02.163025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:35.475 [2024-11-04 16:34:02.163031] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:35.475 [2024-11-04 16:34:02.163035] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:35.475 [2024-11-04 16:34:02.163039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.475 [2024-11-04 16:34:02.172629] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:35.475 [2024-11-04 16:34:02.172644] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:35.475 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:35.476 16:34:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:35.476 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.769 16:34:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.769 16:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.704 [2024-11-04 16:34:03.495080] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.704 [2024-11-04 16:34:03.495098] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.704 [2024-11-04 16:34:03.495109] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.961 [2024-11-04 16:34:03.583377] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:22:37.220 [2024-11-04 16:34:03.891804] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:37.220 [2024-11-04 16:34:03.892408] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xdb2960:1 started. 00:22:37.220 [2024-11-04 16:34:03.893990] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.220 [2024-11-04 16:34:03.894015] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 [2024-11-04 16:34:03.903174] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xdb2960 was disconnected and freed. delete nvme_qpair. 00:22:37.220 request: 00:22:37.220 { 00:22:37.220 "name": "nvme", 00:22:37.220 "trtype": "tcp", 00:22:37.220 "traddr": "10.0.0.2", 00:22:37.220 "adrfam": "ipv4", 00:22:37.220 "trsvcid": "8009", 00:22:37.220 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:37.220 "wait_for_attach": true, 00:22:37.220 "method": "bdev_nvme_start_discovery", 00:22:37.220 "req_id": 1 00:22:37.220 } 00:22:37.220 Got JSON-RPC error response 00:22:37.220 response: 00:22:37.220 { 00:22:37.220 "code": -17, 00:22:37.220 "message": "File exists" 00:22:37.220 } 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.220 16:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 request: 00:22:37.220 { 00:22:37.220 "name": "nvme_second", 00:22:37.220 "trtype": "tcp", 00:22:37.220 "traddr": "10.0.0.2", 00:22:37.220 "adrfam": "ipv4", 00:22:37.220 "trsvcid": "8009", 00:22:37.220 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:37.220 "wait_for_attach": true, 00:22:37.220 "method": "bdev_nvme_start_discovery", 00:22:37.220 "req_id": 1 00:22:37.220 } 00:22:37.220 Got JSON-RPC error response 00:22:37.220 response: 00:22:37.220 { 00:22:37.220 "code": -17, 00:22:37.220 "message": "File exists" 00:22:37.220 } 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:37.220 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.479 16:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.479 16:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.427 [2024-11-04 16:34:05.118668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.427 [2024-11-04 16:34:05.118697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0xde5d90 with addr=10.0.0.2, port=8010 00:22:38.427 [2024-11-04 16:34:05.118713] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:38.427 [2024-11-04 16:34:05.118719] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:38.427 [2024-11-04 16:34:05.118726] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:39.362 [2024-11-04 16:34:06.121101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.362 [2024-11-04 16:34:06.121124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5d90 with addr=10.0.0.2, port=8010 00:22:39.362 [2024-11-04 16:34:06.121135] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:39.362 [2024-11-04 16:34:06.121141] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:39.362 [2024-11-04 16:34:06.121147] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:40.738 [2024-11-04 16:34:07.123337] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:40.738 request: 00:22:40.738 { 00:22:40.738 "name": "nvme_second", 00:22:40.738 "trtype": "tcp", 00:22:40.738 "traddr": "10.0.0.2", 00:22:40.738 "adrfam": "ipv4", 00:22:40.738 "trsvcid": "8010", 00:22:40.738 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:40.738 "wait_for_attach": false, 00:22:40.738 "attach_timeout_ms": 3000, 00:22:40.738 "method": "bdev_nvme_start_discovery", 00:22:40.738 "req_id": 1 00:22:40.738 } 00:22:40.738 Got JSON-RPC error response 00:22:40.738 response: 00:22:40.738 { 00:22:40.738 "code": -110, 00:22:40.738 "message": "Connection timed out" 00:22:40.738 } 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:40.738 16:34:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2916281 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:40.738 16:34:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.738 rmmod nvme_tcp 00:22:40.738 rmmod nvme_fabrics 00:22:40.738 rmmod nvme_keyring 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2916218 ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2916218 ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2916218' 
00:22:40.738 killing process with pid 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2916218 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.738 16:34:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.271 00:22:43.271 real 0m18.373s 00:22:43.271 user 0m22.857s 00:22:43.271 sys 0m5.841s 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.271 16:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.271 ************************************ 00:22:43.271 END TEST nvmf_host_discovery 00:22:43.271 ************************************ 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.271 ************************************ 00:22:43.271 START TEST nvmf_host_multipath_status 00:22:43.271 ************************************ 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:43.271 * Looking for test storage... 
00:22:43.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:43.271 16:34:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.271 16:34:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.271 --rc genhtml_branch_coverage=1 00:22:43.271 --rc genhtml_function_coverage=1 00:22:43.271 --rc genhtml_legend=1 00:22:43.271 --rc geninfo_all_blocks=1 00:22:43.271 --rc geninfo_unexecuted_blocks=1 00:22:43.271 00:22:43.271 ' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.271 --rc genhtml_branch_coverage=1 00:22:43.271 --rc genhtml_function_coverage=1 00:22:43.271 --rc genhtml_legend=1 00:22:43.271 --rc geninfo_all_blocks=1 00:22:43.271 --rc geninfo_unexecuted_blocks=1 00:22:43.271 00:22:43.271 ' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.271 --rc genhtml_branch_coverage=1 00:22:43.271 --rc genhtml_function_coverage=1 00:22:43.271 --rc genhtml_legend=1 00:22:43.271 --rc geninfo_all_blocks=1 00:22:43.271 --rc geninfo_unexecuted_blocks=1 00:22:43.271 00:22:43.271 ' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.271 --rc genhtml_branch_coverage=1 00:22:43.271 --rc genhtml_function_coverage=1 00:22:43.271 --rc genhtml_legend=1 00:22:43.271 --rc geninfo_all_blocks=1 00:22:43.271 --rc geninfo_unexecuted_blocks=1 00:22:43.271 00:22:43.271 ' 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:43.271 
16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.271 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.272 16:34:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.272 16:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.533 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.534 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.534 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.534 16:34:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.534 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.534 16:34:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:22:48.534 00:22:48.534 --- 10.0.0.2 ping statistics --- 00:22:48.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.534 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:22:48.534 00:22:48.534 --- 10.0.0.1 ping statistics --- 00:22:48.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.534 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2921366 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2921366 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2921366 ']' 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.534 [2024-11-04 16:34:14.758630] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:22:48.534 [2024-11-04 16:34:14.758675] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.534 [2024-11-04 16:34:14.823120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:48.534 [2024-11-04 16:34:14.864122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.534 [2024-11-04 16:34:14.864156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:48.534 [2024-11-04 16:34:14.864163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.534 [2024-11-04 16:34:14.864169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.534 [2024-11-04 16:34:14.864174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.534 [2024-11-04 16:34:14.865375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.534 [2024-11-04 16:34:14.865379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.534 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.535 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2921366 00:22:48.535 16:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:48.535 [2024-11-04 16:34:15.156291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.535 16:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:22:48.792 Malloc0 00:22:48.792 16:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:48.792 16:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.050 16:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.307 [2024-11-04 16:34:15.939801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.307 16:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:49.307 [2024-11-04 16:34:16.116243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2921615 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2921615 /var/tmp/bdevperf.sock 00:22:49.565 16:34:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2921615 ']' 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:49.565 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:49.822 16:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:50.386 Nvme0n1 00:22:50.386 16:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:50.643 Nvme0n1 00:22:50.643 16:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:50.643 16:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:53.169 16:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:53.169 16:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:53.169 16:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:53.169 16:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:54.103 16:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:54.103 16:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.103 16:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.103 16:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.360 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.360 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:54.360 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.360 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.618 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:54.878 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.878 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:54.878 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.878 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.136 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.136 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.136 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.136 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.393 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.393 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:55.393 16:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:55.393 16:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:55.650 16:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.021 16:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:57.278 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.278 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:57.278 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.278 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:57.535 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.535 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:57.535 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.535 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:57.792 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.792 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:57.792 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
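Every `port_status` check in this trace follows the same pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, filter the JSON with `jq` by `transport.trsvcid`, and compare a single boolean field against the expected value. A minimal Python emulation of that jq selection is sketched below; the payload is a hypothetical example of a `bdev_nvme_get_io_paths` response shape, not data captured from this run:

```python
# Emulates the trace's filter:
#   jq -r '.poll_groups[].io_paths[]
#          | select(.transport.trsvcid=="4420").current'
# `payload` is a hypothetical sketch of the RPC response, not
# output from this job.
payload = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": False, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": True, "connected": True, "accessible": True},
        ]}
    ]
}

def port_status(port: str, field: str) -> bool:
    """Return `field` for the io_path whose listener uses `port`."""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(port)

# Mirrors the shell comparisons such as [[ false == \f\a\l\s\e ]].
assert port_status("4420", "current") is False
assert port_status("4421", "current") is True
```

The shell script then reduces each comparison to a `[[ actual == expected ]]` test, so a mismatch fails the `check_status` step immediately.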
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.792 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:58.049 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.049 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:58.049 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:58.049 16:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:58.306 16:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.678 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:59.935 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.935 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:59.935 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.936 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.194 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.194 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.194 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.194 16:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.452 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.452 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:00.452 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.452 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:00.711 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.711 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:00.711 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:00.711 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:00.970 16:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:01.905 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:01.905 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:01.905 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.905 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:02.163 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.163 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:02.163 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.163 16:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:02.421 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.421 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:02.421 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.421 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:02.679 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.679 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:02.679 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.679 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.948 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:03.208 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.208 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:03.208 16:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:03.465 16:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:03.722 16:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:04.654 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:04.654 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:04.654 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.654 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.912 16:34:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.912 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.175 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.175 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.175 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.175 16:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.433 
16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.433 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:05.433 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.433 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:05.690 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:05.948 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:06.206 16:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:07.139 16:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:07.139 16:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.139 16:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.139 16:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.397 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.397 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.397 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.397 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.654 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.654 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.654 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.654 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.912 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.912 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.912 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.912 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.169 16:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.427 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.427 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:08.684 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:08.684 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:08.942 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:08.943 16:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
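In the active_passive phase above (before the `bdev_nvme_set_multipath_policy` switch), each `set_ANA_state` pair is followed by a `check_status` whose six arguments are the expected `current`/`connected`/`accessible` flags for ports 4420 and 4421. The mapping below is read directly off this trace; it is a summary of the observed checks, not an authoritative statement of SPDK behavior:

```python
# check_status argument vectors observed in this trace under the
# default active_passive policy, keyed by the ANA states set for
# (port 4420, port 4421) just before each check. Tuple order:
# (4420 current, 4421 current, 4420 connected, 4421 connected,
#  4420 accessible, 4421 accessible).
expected = {
    ("non_optimized", "non_optimized"): (True, False, True, True, True, True),
    ("non_optimized", "inaccessible"):  (True, False, True, True, True, False),
    ("inaccessible", "inaccessible"):   (False, False, True, True, False, False),
    ("inaccessible", "optimized"):      (False, True, True, True, False, True),
}

# An inaccessible listener stays connected but is neither current
# nor accessible; with two equal non_optimized paths the host keeps
# a single current (active) path, here 4420.
assert expected[("inaccessible", "inaccessible")][2:4] == (True, True)
assert expected[("non_optimized", "non_optimized")][:2] == (True, False)
```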
bdev_nvme_get_io_paths 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.318 16:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:10.577 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:10.835 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.835 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:10.835 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:10.835 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.093 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.093 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.093 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.093 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.351 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.351 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:11.351 16:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:11.351 16:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:11.609 16:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.981 16:34:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.981 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:13.238 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.238 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:13.238 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.238 16:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:13.496 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.496 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:13.496 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.496 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:13.754 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.754 
16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:13.754 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:13.754 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.010 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.010 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:14.010 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:14.010 16:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:14.267 16:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.638 16:34:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.638 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.896 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.896 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.896 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.896 16:34:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:16.154 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.154 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:16.154 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.154 16:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:16.413 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.413 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:16.413 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.413 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:16.672 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.672 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:16.672 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:16.672 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:16.929 16:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.302 16:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:18.302 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:18.302 16:34:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:18.302 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.302 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:18.560 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.560 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:18.560 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.560 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:18.818 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.818 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.818 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.818 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.076 
16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2921615 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2921615 ']' 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2921615 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:19.076 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.338 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2921615 00:23:19.338 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.338 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.338 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2921615' 00:23:19.338 killing process with pid 2921615 00:23:19.338 16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2921615 00:23:19.338 
16:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2921615 00:23:19.338 { 00:23:19.338 "results": [ 00:23:19.338 { 00:23:19.338 "job": "Nvme0n1", 00:23:19.338 "core_mask": "0x4", 00:23:19.338 "workload": "verify", 00:23:19.338 "status": "terminated", 00:23:19.338 "verify_range": { 00:23:19.338 "start": 0, 00:23:19.338 "length": 16384 00:23:19.338 }, 00:23:19.338 "queue_depth": 128, 00:23:19.338 "io_size": 4096, 00:23:19.338 "runtime": 28.436726, 00:23:19.338 "iops": 10548.471719283014, 00:23:19.338 "mibps": 41.20496765344927, 00:23:19.338 "io_failed": 0, 00:23:19.338 "io_timeout": 0, 00:23:19.338 "avg_latency_us": 12115.447685747686, 00:23:19.338 "min_latency_us": 721.6761904761905, 00:23:19.338 "max_latency_us": 3019898.88 00:23:19.338 } 00:23:19.338 ], 00:23:19.338 "core_count": 1 00:23:19.338 } 00:23:19.338 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2921615 00:23:19.338 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.338 [2024-11-04 16:34:16.180209] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:23:19.338 [2024-11-04 16:34:16.180259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921615 ] 00:23:19.338 [2024-11-04 16:34:16.238261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.338 [2024-11-04 16:34:16.278727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.338 Running I/O for 90 seconds... 
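The terminated-job summary above reports both `iops` and `mibps` for a fixed `io_size` of 4096 bytes. As a quick consistency check (values copied from the results block, not newly measured), MiB/s should equal IOPS × io_size / 2^20:

```python
# Sanity-check of the bdevperf summary printed above.
# Values are taken verbatim from the log's "results" block.
iops = 10548.471719283014
io_size = 4096                      # bytes, per "io_size": 4096
mibps = iops * io_size / (1 << 20)  # bytes/s -> MiB/s

print(mibps)  # matches the logged "mibps": 41.20496765344927
```

With a 4 KiB I/O size the conversion reduces to IOPS / 256, which is why the two figures track each other exactly.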
00:23:19.338 11156.00 IOPS, 43.58 MiB/s [2024-11-04T15:34:46.162Z] 11335.00 IOPS, 44.28 MiB/s [2024-11-04T15:34:46.162Z] 11295.67 IOPS, 44.12 MiB/s [2024-11-04T15:34:46.162Z] 11325.00 IOPS, 44.24 MiB/s [2024-11-04T15:34:46.162Z] 11350.40 IOPS, 44.34 MiB/s [2024-11-04T15:34:46.162Z] 11390.67 IOPS, 44.49 MiB/s [2024-11-04T15:34:46.162Z] 11414.14 IOPS, 44.59 MiB/s [2024-11-04T15:34:46.162Z] 11419.62 IOPS, 44.61 MiB/s [2024-11-04T15:34:46.162Z] 11425.89 IOPS, 44.63 MiB/s [2024-11-04T15:34:46.162Z] 11415.20 IOPS, 44.59 MiB/s [2024-11-04T15:34:46.162Z] 11378.91 IOPS, 44.45 MiB/s [2024-11-04T15:34:46.162Z] 11379.75 IOPS, 44.45 MiB/s [2024-11-04T15:34:46.162Z] [2024-11-04 16:34:30.132901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.338 [2024-11-04 16:34:30.132943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.132979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.132988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133029] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.133287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.133295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.134013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.134028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.134043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.338 [2024-11-04 16:34:30.134050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.338 [2024-11-04 16:34:30.134064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:23:19.339 [2024-11-04 16:34:30.134477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 
[2024-11-04 16:34:30.134587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.339 [2024-11-04 16:34:30.134694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339 [2024-11-04 16:34:30.134700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.339 
[2024-11-04 16:34:30.134714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.339
[2024-11-04 16:34:30.134721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.339
[... ~79 further identical WRITE command/completion pairs elided: lba advances in steps of 8 from 100040 to 100664 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000); every completion fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0020 through 006e, timestamps 16:34:30.134737 through 16:34:30.136591 ...]
11040.62 IOPS, 43.13 MiB/s [2024-11-04T15:34:46.165Z]
10252.00 IOPS, 40.05 MiB/s [2024-11-04T15:34:46.165Z]
9568.53 IOPS, 37.38 MiB/s [2024-11-04T15:34:46.165Z]
9231.12 IOPS, 36.06 MiB/s [2024-11-04T15:34:46.165Z]
9356.71 IOPS, 36.55 MiB/s [2024-11-04T15:34:46.165Z]
9477.11 IOPS, 37.02 MiB/s [2024-11-04T15:34:46.165Z]
9679.26 IOPS, 37.81 MiB/s [2024-11-04T15:34:46.165Z]
9867.50 IOPS, 38.54 MiB/s [2024-11-04T15:34:46.165Z]
10010.14 IOPS, 39.10 MiB/s [2024-11-04T15:34:46.165Z]
10064.50 IOPS, 39.31 MiB/s [2024-11-04T15:34:46.165Z]
10126.65 IOPS, 39.56 MiB/s [2024-11-04T15:34:46.165Z]
10218.62 IOPS, 39.92 MiB/s [2024-11-04T15:34:46.165Z]
10346.44 IOPS, 40.42 MiB/s [2024-11-04T15:34:46.165Z]
10466.65 IOPS, 40.89 MiB/s [2024-11-04T15:34:46.165Z]
[2024-11-04 16:34:43.657054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.341
[2024-11-04 16:34:43.657095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.341
[... ~33 further READ command/completion pairs elided: lbas 92344 through 92752, then 92032 through 92608 (len:8 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every completion fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001f through 003e, timestamps 16:34:43.657128 through 16:34:43.657766; log cut off mid-entry at the section boundary ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.342 [2024-11-04 16:34:43.657778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.342 [2024-11-04 16:34:43.657785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.342 [2024-11-04 16:34:43.657797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.342 [2024-11-04 16:34:43.657804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.342 [2024-11-04 16:34:43.657816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.342 [2024-11-04 16:34:43.657822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.657834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.657840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.657852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.657859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.343 [2024-11-04 16:34:43.658425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.343 [2024-11-04 16:34:43.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.343 [2024-11-04 16:34:43.658690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.343 10513.70 IOPS, 41.07 MiB/s [2024-11-04T15:34:46.167Z] 10547.68 IOPS, 41.20 MiB/s [2024-11-04T15:34:46.167Z] Received shutdown signal, test time was about 28.437360 seconds 00:23:19.343 00:23:19.343 Latency(us) 00:23:19.343 [2024-11-04T15:34:46.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.343 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:19.343 Verification LBA range: start 0x0 length 0x4000 00:23:19.343 Nvme0n1 : 28.44 10548.47 41.20 0.00 0.00 12115.45 721.68 3019898.88 00:23:19.343 [2024-11-04T15:34:46.167Z] =================================================================================================================== 00:23:19.343 [2024-11-04T15:34:46.167Z] Total : 10548.47 41.20 0.00 0.00 12115.45 721.68 3019898.88 00:23:19.343 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.601 rmmod nvme_tcp 00:23:19.601 rmmod nvme_fabrics 00:23:19.601 rmmod nvme_keyring 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2921366 ']' 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2921366 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2921366 ']' 00:23:19.601 16:34:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2921366 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2921366 00:23:19.601 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.602 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.602 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2921366' 00:23:19.602 killing process with pid 2921366 00:23:19.602 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2921366 00:23:19.602 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2921366 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:19.909 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.910 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.910 16:34:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.910 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.910 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.910 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.910 16:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.858 16:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.858 00:23:21.858 real 0m39.076s 00:23:21.858 user 1m47.569s 00:23:21.858 sys 0m10.765s 00:23:21.858 16:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.858 16:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:21.858 ************************************ 00:23:21.858 END TEST nvmf_host_multipath_status 00:23:21.858 ************************************ 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.117 ************************************ 00:23:22.117 START TEST nvmf_discovery_remove_ifc 00:23:22.117 ************************************ 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:22.117 * Looking for test storage... 00:23:22.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.117 16:34:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:22.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.117 --rc genhtml_branch_coverage=1 00:23:22.117 --rc genhtml_function_coverage=1 00:23:22.117 --rc genhtml_legend=1 00:23:22.117 --rc geninfo_all_blocks=1 00:23:22.117 --rc geninfo_unexecuted_blocks=1 00:23:22.117 00:23:22.117 ' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:22.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.117 --rc genhtml_branch_coverage=1 00:23:22.117 --rc genhtml_function_coverage=1 00:23:22.117 --rc genhtml_legend=1 00:23:22.117 --rc geninfo_all_blocks=1 00:23:22.117 --rc geninfo_unexecuted_blocks=1 00:23:22.117 00:23:22.117 ' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:22.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.117 --rc genhtml_branch_coverage=1 00:23:22.117 --rc genhtml_function_coverage=1 00:23:22.117 --rc genhtml_legend=1 00:23:22.117 --rc geninfo_all_blocks=1 00:23:22.117 --rc geninfo_unexecuted_blocks=1 00:23:22.117 00:23:22.117 ' 00:23:22.117 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:22.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.117 --rc genhtml_branch_coverage=1 00:23:22.117 --rc genhtml_function_coverage=1 00:23:22.117 --rc genhtml_legend=1 00:23:22.117 --rc geninfo_all_blocks=1 00:23:22.117 --rc geninfo_unexecuted_blocks=1 00:23:22.117 00:23:22.117 ' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.118 16:34:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.118 16:34:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:22.118 
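The stray diagnostic in the stream above (`common.sh: line 33: [: : integer expression expected`) comes from the preceding trace line `'[' '' -eq 1 ']'`: the `-eq` operator of `[` requires integer operands on both sides, and the variable under test expanded to an empty string. A minimal sketch of the failure mode and a guarded alternative (the variable name `FLAG` is illustrative, not the actual name in `nvmf/common.sh`):

```shell
# Reproduce the log's error: `-eq` with an empty operand is a usage
# error of `[`, which prints to stderr and returns a non-zero status,
# so the `if` simply takes its else branch and the script carries on.
FLAG=""
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "treated as disabled"
fi

# A guarded form that defaults the empty value to 0 avoids the error:
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

Since the script does not treat this check as fatal, the error is printed but execution continues, which matches the rest of the log.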
16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.118 16:34:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.382 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.383 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.383 16:34:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.383 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.383 16:34:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.383 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.641 16:34:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:23:27.641 00:23:27.641 --- 10.0.0.2 ping statistics --- 00:23:27.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.641 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:27.641 00:23:27.641 --- 10.0.0.1 ping statistics --- 00:23:27.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.641 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2930154 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2930154 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2930154 ']' 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.641 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.642 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.642 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.642 [2024-11-04 16:34:54.412133] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:23:27.642 [2024-11-04 16:34:54.412173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.900 [2024-11-04 16:34:54.479422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.900 [2024-11-04 16:34:54.520350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.900 [2024-11-04 16:34:54.520385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.900 [2024-11-04 16:34:54.520393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.900 [2024-11-04 16:34:54.520403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.900 [2024-11-04 16:34:54.520408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.900 [2024-11-04 16:34:54.520970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.900 [2024-11-04 16:34:54.659470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.900 [2024-11-04 16:34:54.667666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:27.900 null0 00:23:27.900 [2024-11-04 16:34:54.699643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2930177 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2930177 /tmp/host.sock 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2930177 ']' 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:27.900 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.900 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.158 [2024-11-04 16:34:54.766175] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:23:28.158 [2024-11-04 16:34:54.766216] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930177 ] 00:23:28.158 [2024-11-04 16:34:54.836062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.158 [2024-11-04 16:34:54.882260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.158 16:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.416 16:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.416 16:34:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:28.416 16:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.416 16:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.350 [2024-11-04 16:34:56.066754] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.350 [2024-11-04 16:34:56.066776] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.350 [2024-11-04 16:34:56.066795] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.350 [2024-11-04 16:34:56.153061] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.608 [2024-11-04 16:34:56.336100] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:29.608 [2024-11-04 16:34:56.336942] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24e49f0:1 started. 
00:23:29.608 [2024-11-04 16:34:56.338293] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.608 [2024-11-04 16:34:56.338331] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.608 [2024-11-04 16:34:56.338349] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.608 [2024-11-04 16:34:56.338362] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.608 [2024-11-04 16:34:56.338382] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.608 [2024-11-04 16:34:56.345128] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24e49f0 was disconnected and freed. delete nvme_qpair. 
00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:29.608 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.865 16:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.797 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.797 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.797 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.797 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.797 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.798 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.798 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.798 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.798 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.798 16:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.169 16:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.103 16:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.036 16:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.969 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.969 
[2024-11-04 16:35:01.779782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:34.969 [2024-11-04 16:35:01.779824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.969 [2024-11-04 16:35:01.779836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.970 [2024-11-04 16:35:01.779847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.970 [2024-11-04 16:35:01.779854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.970 [2024-11-04 16:35:01.779861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.970 [2024-11-04 16:35:01.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.970 [2024-11-04 16:35:01.779878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.970 [2024-11-04 16:35:01.779884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.970 [2024-11-04 16:35:01.779891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.970 [2024-11-04 16:35:01.779898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.970 [2024-11-04 16:35:01.779904] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c1220 is same with the state(6) to be set 00:23:34.970 [2024-11-04 16:35:01.789803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c1220 (9): Bad file descriptor 00:23:35.227 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.227 16:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.227 [2024-11-04 16:35:01.799843] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.227 [2024-11-04 16:35:01.799866] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.227 [2024-11-04 16:35:01.799871] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.227 [2024-11-04 16:35:01.799876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.227 [2024-11-04 16:35:01.799903] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.160 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.160 [2024-11-04 16:35:02.813644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:36.160 [2024-11-04 16:35:02.813690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c1220 with addr=10.0.0.2, port=4420 00:23:36.160 [2024-11-04 16:35:02.813708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c1220 is same with the state(6) to be set 00:23:36.160 [2024-11-04 16:35:02.813741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c1220 (9): Bad file descriptor 00:23:36.160 [2024-11-04 16:35:02.814198] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:23:36.160 [2024-11-04 16:35:02.814230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.160 [2024-11-04 16:35:02.814242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.161 [2024-11-04 16:35:02.814254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.161 [2024-11-04 16:35:02.814264] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.161 [2024-11-04 16:35:02.814272] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.161 [2024-11-04 16:35:02.814279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.161 [2024-11-04 16:35:02.814295] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.161 [2024-11-04 16:35:02.814302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.161 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.161 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.161 16:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.093 [2024-11-04 16:35:03.816781] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:37.093 [2024-11-04 16:35:03.816803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:37.093 [2024-11-04 16:35:03.816816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:37.093 [2024-11-04 16:35:03.816823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:37.093 [2024-11-04 16:35:03.816830] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:37.093 [2024-11-04 16:35:03.816837] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:37.093 [2024-11-04 16:35:03.816842] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:37.093 [2024-11-04 16:35:03.816846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:37.093 [2024-11-04 16:35:03.816867] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:37.093 [2024-11-04 16:35:03.816887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.093 [2024-11-04 16:35:03.816897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.093 [2024-11-04 16:35:03.816907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.093 [2024-11-04 16:35:03.816914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.093 [2024-11-04 16:35:03.816922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:37.093 [2024-11-04 16:35:03.816929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.093 [2024-11-04 16:35:03.816936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.093 [2024-11-04 16:35:03.816942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.093 [2024-11-04 16:35:03.816950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.093 [2024-11-04 16:35:03.816957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.093 [2024-11-04 16:35:03.816964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:37.093 [2024-11-04 16:35:03.816989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b0900 (9): Bad file descriptor 00:23:37.093 [2024-11-04 16:35:03.817987] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:37.093 [2024-11-04 16:35:03.817997] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.093 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.350 16:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:37.350 16:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:37.350 16:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:38.282 16:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.215 [2024-11-04 16:35:05.871752] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.215 [2024-11-04 16:35:05.871771] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.215 [2024-11-04 16:35:05.871785] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.215 [2024-11-04 16:35:05.960040] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:39.215 [2024-11-04 16:35:06.020654] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:39.215 [2024-11-04 16:35:06.021273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x24b5760:1 started. 00:23:39.215 [2024-11-04 16:35:06.022300] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:39.215 [2024-11-04 16:35:06.022333] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:39.215 [2024-11-04 16:35:06.022350] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:39.215 [2024-11-04 16:35:06.022364] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:39.215 [2024-11-04 16:35:06.022370] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.215 [2024-11-04 16:35:06.029894] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x24b5760 was disconnected and freed. delete nvme_qpair. 
00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2930177 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2930177 ']' 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2930177 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930177 
00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930177' 00:23:39.474 killing process with pid 2930177 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2930177 00:23:39.474 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2930177 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.733 rmmod nvme_tcp 00:23:39.733 rmmod nvme_fabrics 00:23:39.733 rmmod nvme_keyring 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2930154 ']' 00:23:39.733 
16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2930154 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2930154 ']' 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2930154 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930154 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930154' 00:23:39.733 killing process with pid 2930154 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2930154 00:23:39.733 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2930154 00:23:39.991 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.992 16:35:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.992 16:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.892 16:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.892 00:23:41.892 real 0m19.953s 00:23:41.892 user 0m24.506s 00:23:41.892 sys 0m5.430s 00:23:41.892 16:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.892 16:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.892 ************************************ 00:23:41.892 END TEST nvmf_discovery_remove_ifc 00:23:41.892 ************************************ 00:23:41.892 16:35:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.151 ************************************ 00:23:42.151 
START TEST nvmf_identify_kernel_target 00:23:42.151 ************************************ 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.151 * Looking for test storage... 00:23:42.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.151 16:35:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.151 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.152 --rc genhtml_branch_coverage=1 00:23:42.152 --rc genhtml_function_coverage=1 00:23:42.152 --rc genhtml_legend=1 00:23:42.152 --rc geninfo_all_blocks=1 00:23:42.152 --rc geninfo_unexecuted_blocks=1 00:23:42.152 00:23:42.152 ' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.152 --rc genhtml_branch_coverage=1 00:23:42.152 --rc genhtml_function_coverage=1 00:23:42.152 --rc genhtml_legend=1 00:23:42.152 --rc geninfo_all_blocks=1 00:23:42.152 --rc geninfo_unexecuted_blocks=1 00:23:42.152 00:23:42.152 ' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.152 --rc genhtml_branch_coverage=1 00:23:42.152 --rc genhtml_function_coverage=1 00:23:42.152 --rc genhtml_legend=1 00:23:42.152 --rc geninfo_all_blocks=1 00:23:42.152 --rc geninfo_unexecuted_blocks=1 00:23:42.152 00:23:42.152 ' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.152 --rc genhtml_branch_coverage=1 00:23:42.152 --rc genhtml_function_coverage=1 00:23:42.152 --rc genhtml_legend=1 00:23:42.152 --rc geninfo_all_blocks=1 
00:23:42.152 --rc geninfo_unexecuted_blocks=1 00:23:42.152 00:23:42.152 ' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.152 16:35:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.419 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.420 16:35:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:47.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.420 16:35:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:47.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.420 16:35:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:47.420 Found net devices under 0000:86:00.0: cvl_0_0 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:47.420 Found net devices under 0000:86:00.1: cvl_0_1 
00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.420 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:47.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:23:47.678 00:23:47.678 --- 10.0.0.2 ping statistics --- 00:23:47.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.678 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:47.678 00:23:47.678 --- 10.0.0.1 ping statistics --- 00:23:47.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.678 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.678 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:47.679 
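The `nvmf_tcp_init` sequence above splits the two physical ports of the same NIC: `cvl_0_0` is moved into a dedicated network namespace as the target side (10.0.0.2) while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), so the pings traverse the real wire pair instead of loopback. A dry-run sketch of that sequence (interface names and the `run` wrapper are illustrative; the real script also flushes existing addresses first, and the actual commands need root):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from test/nvmf/common.sh: isolate the
# target-side port in its own netns and address both ends on 10.0.0.0/24.
# "run" only prints here; swap it for direct execution (as root) to apply.
run() { echo "$*"; }

setup_tcp_ns() {
    local target_if=$1 initiator_if=$2 ns=$3
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

After this, anything meant to run on the target side (the kernel nvmet target here, or the SPDK app via `NVMF_TARGET_NS_CMD`) is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the two ping checks in the log exercise in both directions.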
16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:47.679 16:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:50.208 Waiting for block devices as requested 00:23:50.208 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:23:50.466 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:50.466 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:50.466 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:50.724 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:50.724 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:50.724 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:50.724 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:50.983 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:50.983 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:50.983 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:50.983 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:51.241 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:51.241 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:51.241 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:23:51.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:51.498 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:51.498 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:51.499 No valid GPT data, bailing 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:51.499 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:23:51.757 00:23:51.757 Discovery Log Number of Records 2, Generation counter 2 00:23:51.757 =====Discovery Log Entry 0====== 00:23:51.757 trtype: tcp 00:23:51.757 adrfam: ipv4 00:23:51.757 subtype: current discovery subsystem 
00:23:51.757 treq: not specified, sq flow control disable supported 00:23:51.757 portid: 1 00:23:51.757 trsvcid: 4420 00:23:51.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:51.757 traddr: 10.0.0.1 00:23:51.757 eflags: none 00:23:51.757 sectype: none 00:23:51.757 =====Discovery Log Entry 1====== 00:23:51.757 trtype: tcp 00:23:51.757 adrfam: ipv4 00:23:51.757 subtype: nvme subsystem 00:23:51.757 treq: not specified, sq flow control disable supported 00:23:51.757 portid: 1 00:23:51.757 trsvcid: 4420 00:23:51.757 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:51.757 traddr: 10.0.0.1 00:23:51.757 eflags: none 00:23:51.757 sectype: none 00:23:51.757 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:51.757 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:51.757 ===================================================== 00:23:51.757 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:51.757 ===================================================== 00:23:51.757 Controller Capabilities/Features 00:23:51.757 ================================ 00:23:51.757 Vendor ID: 0000 00:23:51.757 Subsystem Vendor ID: 0000 00:23:51.757 Serial Number: 44bff4eb63da13c50b2e 00:23:51.757 Model Number: Linux 00:23:51.757 Firmware Version: 6.8.9-20 00:23:51.757 Recommended Arb Burst: 0 00:23:51.757 IEEE OUI Identifier: 00 00 00 00:23:51.757 Multi-path I/O 00:23:51.757 May have multiple subsystem ports: No 00:23:51.757 May have multiple controllers: No 00:23:51.757 Associated with SR-IOV VF: No 00:23:51.757 Max Data Transfer Size: Unlimited 00:23:51.757 Max Number of Namespaces: 0 00:23:51.757 Max Number of I/O Queues: 1024 00:23:51.757 NVMe Specification Version (VS): 1.3 00:23:51.757 NVMe Specification Version (Identify): 1.3 00:23:51.757 Maximum Queue Entries: 1024 
00:23:51.757 Contiguous Queues Required: No 00:23:51.757 Arbitration Mechanisms Supported 00:23:51.757 Weighted Round Robin: Not Supported 00:23:51.757 Vendor Specific: Not Supported 00:23:51.757 Reset Timeout: 7500 ms 00:23:51.757 Doorbell Stride: 4 bytes 00:23:51.757 NVM Subsystem Reset: Not Supported 00:23:51.757 Command Sets Supported 00:23:51.757 NVM Command Set: Supported 00:23:51.757 Boot Partition: Not Supported 00:23:51.757 Memory Page Size Minimum: 4096 bytes 00:23:51.757 Memory Page Size Maximum: 4096 bytes 00:23:51.757 Persistent Memory Region: Not Supported 00:23:51.757 Optional Asynchronous Events Supported 00:23:51.757 Namespace Attribute Notices: Not Supported 00:23:51.757 Firmware Activation Notices: Not Supported 00:23:51.757 ANA Change Notices: Not Supported 00:23:51.757 PLE Aggregate Log Change Notices: Not Supported 00:23:51.757 LBA Status Info Alert Notices: Not Supported 00:23:51.757 EGE Aggregate Log Change Notices: Not Supported 00:23:51.757 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.757 Zone Descriptor Change Notices: Not Supported 00:23:51.757 Discovery Log Change Notices: Supported 00:23:51.757 Controller Attributes 00:23:51.757 128-bit Host Identifier: Not Supported 00:23:51.757 Non-Operational Permissive Mode: Not Supported 00:23:51.757 NVM Sets: Not Supported 00:23:51.757 Read Recovery Levels: Not Supported 00:23:51.757 Endurance Groups: Not Supported 00:23:51.757 Predictable Latency Mode: Not Supported 00:23:51.757 Traffic Based Keep ALive: Not Supported 00:23:51.757 Namespace Granularity: Not Supported 00:23:51.757 SQ Associations: Not Supported 00:23:51.757 UUID List: Not Supported 00:23:51.757 Multi-Domain Subsystem: Not Supported 00:23:51.757 Fixed Capacity Management: Not Supported 00:23:51.757 Variable Capacity Management: Not Supported 00:23:51.757 Delete Endurance Group: Not Supported 00:23:51.757 Delete NVM Set: Not Supported 00:23:51.757 Extended LBA Formats Supported: Not Supported 00:23:51.757 Flexible 
Data Placement Supported: Not Supported 00:23:51.757 00:23:51.757 Controller Memory Buffer Support 00:23:51.757 ================================ 00:23:51.757 Supported: No 00:23:51.757 00:23:51.757 Persistent Memory Region Support 00:23:51.757 ================================ 00:23:51.757 Supported: No 00:23:51.757 00:23:51.757 Admin Command Set Attributes 00:23:51.757 ============================ 00:23:51.757 Security Send/Receive: Not Supported 00:23:51.757 Format NVM: Not Supported 00:23:51.757 Firmware Activate/Download: Not Supported 00:23:51.757 Namespace Management: Not Supported 00:23:51.757 Device Self-Test: Not Supported 00:23:51.757 Directives: Not Supported 00:23:51.757 NVMe-MI: Not Supported 00:23:51.757 Virtualization Management: Not Supported 00:23:51.757 Doorbell Buffer Config: Not Supported 00:23:51.757 Get LBA Status Capability: Not Supported 00:23:51.757 Command & Feature Lockdown Capability: Not Supported 00:23:51.757 Abort Command Limit: 1 00:23:51.757 Async Event Request Limit: 1 00:23:51.757 Number of Firmware Slots: N/A 00:23:51.757 Firmware Slot 1 Read-Only: N/A 00:23:51.757 Firmware Activation Without Reset: N/A 00:23:51.757 Multiple Update Detection Support: N/A 00:23:51.757 Firmware Update Granularity: No Information Provided 00:23:51.757 Per-Namespace SMART Log: No 00:23:51.757 Asymmetric Namespace Access Log Page: Not Supported 00:23:51.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:51.757 Command Effects Log Page: Not Supported 00:23:51.757 Get Log Page Extended Data: Supported 00:23:51.757 Telemetry Log Pages: Not Supported 00:23:51.757 Persistent Event Log Pages: Not Supported 00:23:51.757 Supported Log Pages Log Page: May Support 00:23:51.757 Commands Supported & Effects Log Page: Not Supported 00:23:51.757 Feature Identifiers & Effects Log Page:May Support 00:23:51.757 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.757 Data Area 4 for Telemetry Log: Not Supported 00:23:51.757 Error Log Page Entries 
Supported: 1 00:23:51.757 Keep Alive: Not Supported 00:23:51.757 00:23:51.757 NVM Command Set Attributes 00:23:51.757 ========================== 00:23:51.757 Submission Queue Entry Size 00:23:51.757 Max: 1 00:23:51.757 Min: 1 00:23:51.757 Completion Queue Entry Size 00:23:51.757 Max: 1 00:23:51.757 Min: 1 00:23:51.757 Number of Namespaces: 0 00:23:51.757 Compare Command: Not Supported 00:23:51.757 Write Uncorrectable Command: Not Supported 00:23:51.757 Dataset Management Command: Not Supported 00:23:51.757 Write Zeroes Command: Not Supported 00:23:51.757 Set Features Save Field: Not Supported 00:23:51.757 Reservations: Not Supported 00:23:51.757 Timestamp: Not Supported 00:23:51.757 Copy: Not Supported 00:23:51.757 Volatile Write Cache: Not Present 00:23:51.757 Atomic Write Unit (Normal): 1 00:23:51.757 Atomic Write Unit (PFail): 1 00:23:51.757 Atomic Compare & Write Unit: 1 00:23:51.757 Fused Compare & Write: Not Supported 00:23:51.757 Scatter-Gather List 00:23:51.757 SGL Command Set: Supported 00:23:51.757 SGL Keyed: Not Supported 00:23:51.757 SGL Bit Bucket Descriptor: Not Supported 00:23:51.757 SGL Metadata Pointer: Not Supported 00:23:51.758 Oversized SGL: Not Supported 00:23:51.758 SGL Metadata Address: Not Supported 00:23:51.758 SGL Offset: Supported 00:23:51.758 Transport SGL Data Block: Not Supported 00:23:51.758 Replay Protected Memory Block: Not Supported 00:23:51.758 00:23:51.758 Firmware Slot Information 00:23:51.758 ========================= 00:23:51.758 Active slot: 0 00:23:51.758 00:23:51.758 00:23:51.758 Error Log 00:23:51.758 ========= 00:23:51.758 00:23:51.758 Active Namespaces 00:23:51.758 ================= 00:23:51.758 Discovery Log Page 00:23:51.758 ================== 00:23:51.758 Generation Counter: 2 00:23:51.758 Number of Records: 2 00:23:51.758 Record Format: 0 00:23:51.758 00:23:51.758 Discovery Log Entry 0 00:23:51.758 ---------------------- 00:23:51.758 Transport Type: 3 (TCP) 00:23:51.758 Address Family: 1 (IPv4) 00:23:51.758 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:23:51.758 Entry Flags: 00:23:51.758 Duplicate Returned Information: 0 00:23:51.758 Explicit Persistent Connection Support for Discovery: 0 00:23:51.758 Transport Requirements: 00:23:51.758 Secure Channel: Not Specified 00:23:51.758 Port ID: 1 (0x0001) 00:23:51.758 Controller ID: 65535 (0xffff) 00:23:51.758 Admin Max SQ Size: 32 00:23:51.758 Transport Service Identifier: 4420 00:23:51.758 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:51.758 Transport Address: 10.0.0.1 00:23:51.758 Discovery Log Entry 1 00:23:51.758 ---------------------- 00:23:51.758 Transport Type: 3 (TCP) 00:23:51.758 Address Family: 1 (IPv4) 00:23:51.758 Subsystem Type: 2 (NVM Subsystem) 00:23:51.758 Entry Flags: 00:23:51.758 Duplicate Returned Information: 0 00:23:51.758 Explicit Persistent Connection Support for Discovery: 0 00:23:51.758 Transport Requirements: 00:23:51.758 Secure Channel: Not Specified 00:23:51.758 Port ID: 1 (0x0001) 00:23:51.758 Controller ID: 65535 (0xffff) 00:23:51.758 Admin Max SQ Size: 32 00:23:51.758 Transport Service Identifier: 4420 00:23:51.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:51.758 Transport Address: 10.0.0.1 00:23:51.758 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:52.017 get_feature(0x01) failed 00:23:52.017 get_feature(0x02) failed 00:23:52.017 get_feature(0x04) failed 00:23:52.017 ===================================================== 00:23:52.017 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:52.017 ===================================================== 00:23:52.017 Controller Capabilities/Features 00:23:52.017 ================================ 00:23:52.017 Vendor ID: 0000 00:23:52.017 Subsystem Vendor ID: 
0000 00:23:52.017 Serial Number: dd6cf34254761e66da1e 00:23:52.017 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:52.017 Firmware Version: 6.8.9-20 00:23:52.017 Recommended Arb Burst: 6 00:23:52.017 IEEE OUI Identifier: 00 00 00 00:23:52.017 Multi-path I/O 00:23:52.017 May have multiple subsystem ports: Yes 00:23:52.017 May have multiple controllers: Yes 00:23:52.017 Associated with SR-IOV VF: No 00:23:52.017 Max Data Transfer Size: Unlimited 00:23:52.017 Max Number of Namespaces: 1024 00:23:52.017 Max Number of I/O Queues: 128 00:23:52.017 NVMe Specification Version (VS): 1.3 00:23:52.017 NVMe Specification Version (Identify): 1.3 00:23:52.017 Maximum Queue Entries: 1024 00:23:52.017 Contiguous Queues Required: No 00:23:52.017 Arbitration Mechanisms Supported 00:23:52.017 Weighted Round Robin: Not Supported 00:23:52.017 Vendor Specific: Not Supported 00:23:52.017 Reset Timeout: 7500 ms 00:23:52.017 Doorbell Stride: 4 bytes 00:23:52.017 NVM Subsystem Reset: Not Supported 00:23:52.017 Command Sets Supported 00:23:52.017 NVM Command Set: Supported 00:23:52.017 Boot Partition: Not Supported 00:23:52.017 Memory Page Size Minimum: 4096 bytes 00:23:52.017 Memory Page Size Maximum: 4096 bytes 00:23:52.017 Persistent Memory Region: Not Supported 00:23:52.017 Optional Asynchronous Events Supported 00:23:52.017 Namespace Attribute Notices: Supported 00:23:52.017 Firmware Activation Notices: Not Supported 00:23:52.017 ANA Change Notices: Supported 00:23:52.017 PLE Aggregate Log Change Notices: Not Supported 00:23:52.017 LBA Status Info Alert Notices: Not Supported 00:23:52.017 EGE Aggregate Log Change Notices: Not Supported 00:23:52.017 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.017 Zone Descriptor Change Notices: Not Supported 00:23:52.017 Discovery Log Change Notices: Not Supported 00:23:52.017 Controller Attributes 00:23:52.017 128-bit Host Identifier: Supported 00:23:52.017 Non-Operational Permissive Mode: Not Supported 00:23:52.017 NVM Sets: Not 
Supported 00:23:52.017 Read Recovery Levels: Not Supported 00:23:52.017 Endurance Groups: Not Supported 00:23:52.017 Predictable Latency Mode: Not Supported 00:23:52.017 Traffic Based Keep ALive: Supported 00:23:52.017 Namespace Granularity: Not Supported 00:23:52.017 SQ Associations: Not Supported 00:23:52.017 UUID List: Not Supported 00:23:52.017 Multi-Domain Subsystem: Not Supported 00:23:52.017 Fixed Capacity Management: Not Supported 00:23:52.017 Variable Capacity Management: Not Supported 00:23:52.017 Delete Endurance Group: Not Supported 00:23:52.017 Delete NVM Set: Not Supported 00:23:52.017 Extended LBA Formats Supported: Not Supported 00:23:52.017 Flexible Data Placement Supported: Not Supported 00:23:52.017 00:23:52.017 Controller Memory Buffer Support 00:23:52.017 ================================ 00:23:52.017 Supported: No 00:23:52.017 00:23:52.017 Persistent Memory Region Support 00:23:52.017 ================================ 00:23:52.017 Supported: No 00:23:52.017 00:23:52.017 Admin Command Set Attributes 00:23:52.017 ============================ 00:23:52.017 Security Send/Receive: Not Supported 00:23:52.017 Format NVM: Not Supported 00:23:52.017 Firmware Activate/Download: Not Supported 00:23:52.017 Namespace Management: Not Supported 00:23:52.017 Device Self-Test: Not Supported 00:23:52.017 Directives: Not Supported 00:23:52.017 NVMe-MI: Not Supported 00:23:52.017 Virtualization Management: Not Supported 00:23:52.017 Doorbell Buffer Config: Not Supported 00:23:52.017 Get LBA Status Capability: Not Supported 00:23:52.017 Command & Feature Lockdown Capability: Not Supported 00:23:52.017 Abort Command Limit: 4 00:23:52.017 Async Event Request Limit: 4 00:23:52.017 Number of Firmware Slots: N/A 00:23:52.017 Firmware Slot 1 Read-Only: N/A 00:23:52.017 Firmware Activation Without Reset: N/A 00:23:52.017 Multiple Update Detection Support: N/A 00:23:52.017 Firmware Update Granularity: No Information Provided 00:23:52.017 Per-Namespace SMART Log: Yes 
00:23:52.017 Asymmetric Namespace Access Log Page: Supported 00:23:52.017 ANA Transition Time : 10 sec 00:23:52.017 00:23:52.017 Asymmetric Namespace Access Capabilities 00:23:52.017 ANA Optimized State : Supported 00:23:52.017 ANA Non-Optimized State : Supported 00:23:52.017 ANA Inaccessible State : Supported 00:23:52.017 ANA Persistent Loss State : Supported 00:23:52.017 ANA Change State : Supported 00:23:52.017 ANAGRPID is not changed : No 00:23:52.017 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:52.017 00:23:52.017 ANA Group Identifier Maximum : 128 00:23:52.017 Number of ANA Group Identifiers : 128 00:23:52.017 Max Number of Allowed Namespaces : 1024 00:23:52.017 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:52.017 Command Effects Log Page: Supported 00:23:52.017 Get Log Page Extended Data: Supported 00:23:52.017 Telemetry Log Pages: Not Supported 00:23:52.017 Persistent Event Log Pages: Not Supported 00:23:52.017 Supported Log Pages Log Page: May Support 00:23:52.018 Commands Supported & Effects Log Page: Not Supported 00:23:52.018 Feature Identifiers & Effects Log Page:May Support 00:23:52.018 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.018 Data Area 4 for Telemetry Log: Not Supported 00:23:52.018 Error Log Page Entries Supported: 128 00:23:52.018 Keep Alive: Supported 00:23:52.018 Keep Alive Granularity: 1000 ms 00:23:52.018 00:23:52.018 NVM Command Set Attributes 00:23:52.018 ========================== 00:23:52.018 Submission Queue Entry Size 00:23:52.018 Max: 64 00:23:52.018 Min: 64 00:23:52.018 Completion Queue Entry Size 00:23:52.018 Max: 16 00:23:52.018 Min: 16 00:23:52.018 Number of Namespaces: 1024 00:23:52.018 Compare Command: Not Supported 00:23:52.018 Write Uncorrectable Command: Not Supported 00:23:52.018 Dataset Management Command: Supported 00:23:52.018 Write Zeroes Command: Supported 00:23:52.018 Set Features Save Field: Not Supported 00:23:52.018 Reservations: Not Supported 00:23:52.018 Timestamp: Not Supported 
00:23:52.018 Copy: Not Supported 00:23:52.018 Volatile Write Cache: Present 00:23:52.018 Atomic Write Unit (Normal): 1 00:23:52.018 Atomic Write Unit (PFail): 1 00:23:52.018 Atomic Compare & Write Unit: 1 00:23:52.018 Fused Compare & Write: Not Supported 00:23:52.018 Scatter-Gather List 00:23:52.018 SGL Command Set: Supported 00:23:52.018 SGL Keyed: Not Supported 00:23:52.018 SGL Bit Bucket Descriptor: Not Supported 00:23:52.018 SGL Metadata Pointer: Not Supported 00:23:52.018 Oversized SGL: Not Supported 00:23:52.018 SGL Metadata Address: Not Supported 00:23:52.018 SGL Offset: Supported 00:23:52.018 Transport SGL Data Block: Not Supported 00:23:52.018 Replay Protected Memory Block: Not Supported 00:23:52.018 00:23:52.018 Firmware Slot Information 00:23:52.018 ========================= 00:23:52.018 Active slot: 0 00:23:52.018 00:23:52.018 Asymmetric Namespace Access 00:23:52.018 =========================== 00:23:52.018 Change Count : 0 00:23:52.018 Number of ANA Group Descriptors : 1 00:23:52.018 ANA Group Descriptor : 0 00:23:52.018 ANA Group ID : 1 00:23:52.018 Number of NSID Values : 1 00:23:52.018 Change Count : 0 00:23:52.018 ANA State : 1 00:23:52.018 Namespace Identifier : 1 00:23:52.018 00:23:52.018 Commands Supported and Effects 00:23:52.018 ============================== 00:23:52.018 Admin Commands 00:23:52.018 -------------- 00:23:52.018 Get Log Page (02h): Supported 00:23:52.018 Identify (06h): Supported 00:23:52.018 Abort (08h): Supported 00:23:52.018 Set Features (09h): Supported 00:23:52.018 Get Features (0Ah): Supported 00:23:52.018 Asynchronous Event Request (0Ch): Supported 00:23:52.018 Keep Alive (18h): Supported 00:23:52.018 I/O Commands 00:23:52.018 ------------ 00:23:52.018 Flush (00h): Supported 00:23:52.018 Write (01h): Supported LBA-Change 00:23:52.018 Read (02h): Supported 00:23:52.018 Write Zeroes (08h): Supported LBA-Change 00:23:52.018 Dataset Management (09h): Supported 00:23:52.018 00:23:52.018 Error Log 00:23:52.018 ========= 
00:23:52.018 Entry: 0 00:23:52.018 Error Count: 0x3 00:23:52.018 Submission Queue Id: 0x0 00:23:52.018 Command Id: 0x5 00:23:52.018 Phase Bit: 0 00:23:52.018 Status Code: 0x2 00:23:52.018 Status Code Type: 0x0 00:23:52.018 Do Not Retry: 1 00:23:52.018 Error Location: 0x28 00:23:52.018 LBA: 0x0 00:23:52.018 Namespace: 0x0 00:23:52.018 Vendor Log Page: 0x0 00:23:52.018 ----------- 00:23:52.018 Entry: 1 00:23:52.018 Error Count: 0x2 00:23:52.018 Submission Queue Id: 0x0 00:23:52.018 Command Id: 0x5 00:23:52.018 Phase Bit: 0 00:23:52.018 Status Code: 0x2 00:23:52.018 Status Code Type: 0x0 00:23:52.018 Do Not Retry: 1 00:23:52.018 Error Location: 0x28 00:23:52.018 LBA: 0x0 00:23:52.018 Namespace: 0x0 00:23:52.018 Vendor Log Page: 0x0 00:23:52.018 ----------- 00:23:52.018 Entry: 2 00:23:52.018 Error Count: 0x1 00:23:52.018 Submission Queue Id: 0x0 00:23:52.018 Command Id: 0x4 00:23:52.018 Phase Bit: 0 00:23:52.018 Status Code: 0x2 00:23:52.018 Status Code Type: 0x0 00:23:52.018 Do Not Retry: 1 00:23:52.018 Error Location: 0x28 00:23:52.018 LBA: 0x0 00:23:52.018 Namespace: 0x0 00:23:52.018 Vendor Log Page: 0x0 00:23:52.018 00:23:52.018 Number of Queues 00:23:52.018 ================ 00:23:52.018 Number of I/O Submission Queues: 128 00:23:52.018 Number of I/O Completion Queues: 128 00:23:52.018 00:23:52.018 ZNS Specific Controller Data 00:23:52.018 ============================ 00:23:52.018 Zone Append Size Limit: 0 00:23:52.018 00:23:52.018 00:23:52.018 Active Namespaces 00:23:52.018 ================= 00:23:52.018 get_feature(0x05) failed 00:23:52.018 Namespace ID:1 00:23:52.018 Command Set Identifier: NVM (00h) 00:23:52.018 Deallocate: Supported 00:23:52.018 Deallocated/Unwritten Error: Not Supported 00:23:52.018 Deallocated Read Value: Unknown 00:23:52.018 Deallocate in Write Zeroes: Not Supported 00:23:52.018 Deallocated Guard Field: 0xFFFF 00:23:52.018 Flush: Supported 00:23:52.018 Reservation: Not Supported 00:23:52.018 Namespace Sharing Capabilities: Multiple 
Controllers 00:23:52.018 Size (in LBAs): 3125627568 (1490GiB) 00:23:52.018 Capacity (in LBAs): 3125627568 (1490GiB) 00:23:52.018 Utilization (in LBAs): 3125627568 (1490GiB) 00:23:52.018 UUID: 891682c6-1fdf-4d07-8209-9476417e7208 00:23:52.018 Thin Provisioning: Not Supported 00:23:52.018 Per-NS Atomic Units: Yes 00:23:52.018 Atomic Boundary Size (Normal): 0 00:23:52.018 Atomic Boundary Size (PFail): 0 00:23:52.018 Atomic Boundary Offset: 0 00:23:52.018 NGUID/EUI64 Never Reused: No 00:23:52.018 ANA group ID: 1 00:23:52.018 Namespace Write Protected: No 00:23:52.018 Number of LBA Formats: 1 00:23:52.018 Current LBA Format: LBA Format #00 00:23:52.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.018 00:23:52.018 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.019 rmmod nvme_tcp 00:23:52.019 rmmod nvme_fabrics 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.019 16:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.920 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.920 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:53.920 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:53.920 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:53.920 16:35:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:54.177 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.704 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:56.704 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:23:58.080 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:23:58.338 00:23:58.338 real 0m16.285s 00:23:58.338 user 0m3.963s 00:23:58.338 sys 0m8.079s 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.338 ************************************ 00:23:58.338 END TEST nvmf_identify_kernel_target 00:23:58.338 ************************************ 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.338 ************************************ 00:23:58.338 START TEST nvmf_auth_host 00:23:58.338 ************************************ 00:23:58.338 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.596 * Looking for test storage... 
00:23:58.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.597 --rc genhtml_branch_coverage=1 00:23:58.597 --rc genhtml_function_coverage=1 00:23:58.597 --rc genhtml_legend=1 00:23:58.597 --rc geninfo_all_blocks=1 00:23:58.597 --rc geninfo_unexecuted_blocks=1 00:23:58.597 00:23:58.597 ' 00:23:58.597 16:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.597 --rc genhtml_branch_coverage=1 00:23:58.597 --rc genhtml_function_coverage=1 00:23:58.597 --rc genhtml_legend=1 00:23:58.597 --rc geninfo_all_blocks=1 00:23:58.597 --rc geninfo_unexecuted_blocks=1 00:23:58.597 00:23:58.597 ' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.597 --rc genhtml_branch_coverage=1 00:23:58.597 --rc genhtml_function_coverage=1 00:23:58.597 --rc genhtml_legend=1 00:23:58.597 --rc geninfo_all_blocks=1 00:23:58.597 --rc geninfo_unexecuted_blocks=1 00:23:58.597 00:23:58.597 ' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.597 --rc genhtml_branch_coverage=1 00:23:58.597 --rc genhtml_function_coverage=1 00:23:58.597 --rc genhtml_legend=1 00:23:58.597 --rc geninfo_all_blocks=1 00:23:58.597 --rc geninfo_unexecuted_blocks=1 00:23:58.597 00:23:58.597 ' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.597 16:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:58.597 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.598 16:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.598 16:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.160 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.160 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.160 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.160 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.160 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.161 16:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.161 16:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.161 16:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:24:05.161 00:24:05.161 --- 10.0.0.2 ping statistics --- 00:24:05.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.161 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:05.161 00:24:05.161 --- 10.0.0.1 ping statistics --- 00:24:05.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.161 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2941953 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2941953 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2941953 ']' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.161 16:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4688d6c08689d5c07861d1535cd983c7 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.257 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4688d6c08689d5c07861d1535cd983c7 0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4688d6c08689d5c07861d1535cd983c7 0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4688d6c08689d5c07861d1535cd983c7 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.257 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.257 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.257 
00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7158bbb1c303af323898483b47921cc4f79d256e940abaecd3e5ebf613fc85c2 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.h1o 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7158bbb1c303af323898483b47921cc4f79d256e940abaecd3e5ebf613fc85c2 3 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7158bbb1c303af323898483b47921cc4f79d256e940abaecd3e5ebf613fc85c2 3 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7158bbb1c303af323898483b47921cc4f79d256e940abaecd3e5ebf613fc85c2 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.h1o 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.h1o 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.h1o 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.161 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=435cc7a9379ccb8a0b4419c94da8331859cb90a33b1404b3 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yw4 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 435cc7a9379ccb8a0b4419c94da8331859cb90a33b1404b3 0 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 435cc7a9379ccb8a0b4419c94da8331859cb90a33b1404b3 0 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=435cc7a9379ccb8a0b4419c94da8331859cb90a33b1404b3 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yw4 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yw4 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yw4 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=59fbc501e632609f8203501e014687d45b64cc9dacfa036b 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TiA 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 59fbc501e632609f8203501e014687d45b64cc9dacfa036b 2 00:24:05.162 16:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 59fbc501e632609f8203501e014687d45b64cc9dacfa036b 2 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=59fbc501e632609f8203501e014687d45b64cc9dacfa036b 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TiA 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TiA 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.TiA 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b74288b71791c423000bae02a0bf2e8 00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IoW
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b74288b71791c423000bae02a0bf2e8 1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b74288b71791c423000bae02a0bf2e8 1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b74288b71791c423000bae02a0bf2e8
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IoW
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IoW
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.IoW
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a07e9e3d08b4eaa9e9d8c365ff6fc1d
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rEd
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a07e9e3d08b4eaa9e9d8c365ff6fc1d 1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a07e9e3d08b4eaa9e9d8c365ff6fc1d 1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a07e9e3d08b4eaa9e9d8c365ff6fc1d
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rEd
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rEd
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rEd
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26502d73875b150b292d7cc40a85e5c11000ad465274d374
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lB6
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26502d73875b150b292d7cc40a85e5c11000ad465274d374 2
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26502d73875b150b292d7cc40a85e5c11000ad465274d374 2
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=26502d73875b150b292d7cc40a85e5c11000ad465274d374
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lB6
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lB6
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lB6
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:24:05.162 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8f74d761193092797b1b0a741c6ab7c3
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RKj
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8f74d761193092797b1b0a741c6ab7c3 0
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8f74d761193092797b1b0a741c6ab7c3 0
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8f74d761193092797b1b0a741c6ab7c3
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RKj
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RKj
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RKj
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:24:05.163 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa8c2743628ca40b0a414551cc90ce26dc47f06f1cfe9d7eacb210ffed04ab6b
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I2L
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa8c2743628ca40b0a414551cc90ce26dc47f06f1cfe9d7eacb210ffed04ab6b 3
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa8c2743628ca40b0a414551cc90ce26dc47f06f1cfe9d7eacb210ffed04ab6b 3
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa8c2743628ca40b0a414551cc90ce26dc47f06f1cfe9d7eacb210ffed04ab6b
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:24:05.421 16:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I2L
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I2L
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.I2L
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2941953
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2941953 ']'
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:05.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.257
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.421 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.h1o ]]
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.h1o
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yw4
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
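The `rpc_cmd keyring_file_add_key keyN <file>` calls above register each generated key file with the SPDK target. `rpc_cmd` ultimately talks JSON-RPC 2.0 to the `/var/tmp/spdk.sock` UNIX socket (the socket `waitforlisten` polled for). A hedged sketch of that framing follows; the `"name"`/`"path"` parameter names are assumed from the positional CLI arguments and are not visible in the log itself:

```python
import json
import socket

def build_rpc_request(method: str, params: dict, req_id: int = 1) -> bytes:
    # JSON-RPC 2.0 payload as the rpc client would send it over the
    # UNIX socket (SPDK reads concatenated JSON objects; no length prefix).
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }).encode()

def keyring_file_add_key(sock_path: str, name: str, path: str) -> dict:
    # Sketch of: rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.257
    # ("name"/"path" parameter names are an assumption).
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_rpc_request("keyring_file_add_key",
                                    {"name": name, "path": path}))
        return json.loads(s.recv(65536))
```

The `[[ 0 == 0 ]]` checks interleaved in the trace are `rpc_cmd` verifying the client's exit status after each such request.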
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.TiA ]]
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TiA
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.679 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IoW
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rEd ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rEd
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lB6
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RKj ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RKj
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.I2L
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:24:05.680 16:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:24:08.206 Waiting for block devices as requested
00:24:08.206 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:24:08.464 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:08.464 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:08.464 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:08.464 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:08.722 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:08.722 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:08.722 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:08.722 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:08.979 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:24:08.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:24:08.979 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:24:09.237 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:24:09.237 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:24:09.237 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:24:09.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:24:09.494 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:24:10.059 No valid GPT data, bailing
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:24:10.059
00:24:10.059 Discovery Log Number of Records 2, Generation counter 2
00:24:10.059 =====Discovery Log Entry 0======
00:24:10.059 trtype: tcp
00:24:10.059 adrfam: ipv4
00:24:10.059 subtype: current discovery subsystem
00:24:10.059 treq: not specified, sq flow control disable supported
00:24:10.059 portid: 1
00:24:10.059 trsvcid: 4420
00:24:10.059 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:24:10.059 traddr: 10.0.0.1
00:24:10.059 eflags: none
00:24:10.059 sectype: none
00:24:10.059 =====Discovery Log Entry 1======
00:24:10.059 trtype: tcp
00:24:10.059 adrfam: ipv4
00:24:10.059 subtype: nvme subsystem
00:24:10.059 treq: not specified, sq flow control disable supported
00:24:10.059 portid: 1
00:24:10.059 trsvcid: 4420
00:24:10.059 subnqn: nqn.2024-02.io.spdk:cnode0
00:24:10.059 traddr: 10.0.0.1
00:24:10.059 eflags: none
00:24:10.059 sectype: none
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:24:10.059 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]]
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.318 16:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.318 nvme0n1
00:24:10.318 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.318 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.318 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.318 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o:
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=:
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o:
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]]
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=:
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
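The `configure_kernel_target` sequence traced above (mkdir the subsystem, namespace, and port under `/sys/kernel/config/nvmet`, a series of bare `echo`s, then `ln -s` of the subsystem into the port) can be summarized as an ordered list of configfs operations. In the sketch below the mapping of each `echo` to a concrete attribute file is an assumption drawn from the kernel nvmet configfs layout; the log only shows the values being echoed, not their destinations:

```python
from typing import List, Tuple

NVMET = "/sys/kernel/config/nvmet"

def kernel_target_ops(subnqn: str, ip: str, device: str,
                      svcid: str = "4420") -> List[Tuple[str, str, str]]:
    # (op, path, value) triples mirroring configure_kernel_target as traced;
    # the attribute file names are assumptions, not shown in the log.
    subsys = f"{NVMET}/subsystems/{subnqn}"
    ns = f"{subsys}/namespaces/1"
    port = f"{NVMET}/ports/1"
    return [
        ("mkdir", subsys, ""),
        ("mkdir", ns, ""),
        ("mkdir", port, ""),
        ("write", f"{subsys}/attr_model", f"SPDK-{subnqn}"),  # echo SPDK-nqn...
        ("write", f"{subsys}/attr_allow_any_host", "1"),      # echo 1
        ("write", f"{ns}/device_path", device),               # echo /dev/nvme0n1
        ("write", f"{ns}/enable", "1"),                       # echo 1
        ("write", f"{port}/addr_traddr", ip),                 # echo 10.0.0.1
        ("write", f"{port}/addr_trtype", "tcp"),              # echo tcp
        ("write", f"{port}/addr_trsvcid", svcid),             # echo 4420
        ("write", f"{port}/addr_adrfam", "ipv4"),             # echo ipv4
        ("symlink", subsys, f"{port}/subsystems/{subnqn}"),   # ln -s
    ]
```

Replaying these operations (as root, with the `nvmet` module loaded) yields the two discovery-log entries seen in the `nvme discover` output; the subsequent `hosts/` mkdir, `echo 0`, and `allowed_hosts` symlink then restrict the subsystem to `nqn.2024-02.io.spdk:host0`.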
00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.319 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.577 nvme0n1 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.577 16:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.577 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.578 
16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.578 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.836 nvme0n1 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.836 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.837 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:11.095 nvme0n1 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.095 nvme0n1 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.095 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.354 16:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.354 16:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 nvme0n1 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.354 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.354 
16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:11.613 
16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.613 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.613 nvme0n1 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.613 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.871 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.871 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.871 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:11.871 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.871 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.872 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 nvme0n1 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.872 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.130 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.130 nvme0n1 00:24:12.130 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.130 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:12.131 16:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.131 16:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.389 nvme0n1 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.389 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.390 16:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.390 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.648 nvme0n1 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.648 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.649 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.944 nvme0n1 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.944 
16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.944 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.204 16:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.204 nvme0n1 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.204 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.515 16:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.515 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.516 nvme0n1 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.516 16:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.516 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:13.773 
16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.773 16:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.773 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.031 nvme0n1 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.031 16:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.031 
16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.031 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.288 nvme0n1 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.288 16:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.288 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.289 16:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.289 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.853 nvme0n1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.853 16:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.853 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.110 nvme0n1 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.110 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:15.111 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.111 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.111 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.368 16:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.368 16:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.625 nvme0n1 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.625 16:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.625 16:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.625 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.626 16:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.626 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.190 nvme0n1 00:24:16.190 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.190 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.190 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.191 16:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.191 16:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.191 16:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.448 nvme0n1 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.448 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.706 16:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.706 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.707 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.273 nvme0n1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.273 16:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.273 16:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.273 16:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.273 16:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.840 nvme0n1 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.840 16:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.840 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.841 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.841 16:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 nvme0n1 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.407 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.665 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.231 nvme0n1 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.231 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.232 
16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.232 16:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.797 nvme0n1 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.797 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.055 nvme0n1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.055 
16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.055 nvme0n1 
00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.055 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.056 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.056 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:20.314 16:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.314 
16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.314 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.315 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.315 16:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.315 nvme0n1 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.315 16:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.315 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.573 nvme0n1 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.573 16:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.573 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.831 nvme0n1 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.831 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.832 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.090 nvme0n1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.090 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.349 nvme0n1
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.349 16:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42:
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U:
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42:
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U:
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.349 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.607 nvme0n1
00:24:21.607 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.607 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==:
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5:
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==:
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5:
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.608 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.866 nvme0n1
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=:
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=:
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.866 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.124 nvme0n1
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o:
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=:
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o:
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=:
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.124 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.125 16:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.383 nvme0n1
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.383 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==:
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==:
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.384 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.642 nvme0n1
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42:
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U:
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42:
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U:
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.642 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.643 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.643 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.643 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:22.643 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.643 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.906 nvme0n1
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.906 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.907 16:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.907 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.174 nvme0n1 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.174 16:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 16:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.432 16:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.432 
16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.432 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 nvme0n1 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.691 16:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.691 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.950 nvme0n1 
00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:23.950 16:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.950 
16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.950 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.208 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.208 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.208 16:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.466 nvme0n1 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 16:35:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.031 nvme0n1 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:25.031 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.032 16:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 nvme0n1 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.290 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.858 nvme0n1 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.858 16:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.858 16:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.858 16:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.423 nvme0n1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:26.423 16:35:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.423 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.989 nvme0n1 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.989 
16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.989 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.246 16:35:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.246 16:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.812 nvme0n1 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.812 16:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.812 16:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.812 16:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 nvme0n1 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:28.377 16:35:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.377 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.378 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.943 nvme0n1 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.943 
16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.943 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.201 nvme0n1 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.201 16:35:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:29.201 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.202 16:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.460 nvme0n1 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:29.460 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.460 nvme0n1 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.460 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.719 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.719 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.719 nvme0n1 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.719 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.720 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.978 nvme0n1 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.978 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.978 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.237 nvme0n1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.237 16:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.237 16:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.495 nvme0n1 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:30.495 
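The `DHHC-1:...` strings echoed in the trace above are NVMe-oF DH-HMAC-CHAP secrets in their standard textual form, `DHHC-1:<hash>:<base64 payload>:`. As I understand the format, the hash field indicates how the secret was derived (`00` = used as-is, `01`/`02`/`03` = transformed via SHA-256/384/512), and the base64 payload carries the key bytes followed by a 4-byte CRC-32 trailer. A minimal standalone sketch that splits a secret into these fields (pure bash plus coreutils `base64`; it does not validate the CRC):

```shell
#!/usr/bin/env bash
# Hedged sketch: split a DH-HMAC-CHAP secret into its fields.
# Assumed format: DHHC-1:<hash>:<base64(key || crc32)>:
# hash 00 = secret used as-is; 01/02/03 = derived via SHA-256/384/512.
parse_dhchap_secret() {
    local secret=$1 prefix hash blob
    IFS=: read -r prefix hash blob _ <<< "$secret"
    if [[ $prefix != DHHC-1 ]]; then
        echo "not a DHHC-1 secret" >&2
        return 1
    fi
    # base64 payload = key bytes plus a 4-byte CRC-32 trailer (assumption)
    local total key_len
    total=$(printf '%s' "$blob" | base64 -d | wc -c)
    key_len=$((total - 4))
    echo "hash=$hash key_bytes=$key_len"
}

# The keyid-0 secret from the trace: 48 base64 chars -> 36 bytes -> 32-byte key
parse_dhchap_secret 'DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o:'
```

Run against the keyid-0 secret above, this reports a 32-byte key, which is consistent with the shorter `:00:`/`:01:` secrets and longer `:02:`/`:03:` secrets seen throughout the trace.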
16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.495 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.496 16:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.496 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.754 nvme0n1 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.754 16:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.754 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 nvme0n1 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.013 16:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.013 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.271 nvme0n1 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.271 
16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.271 16:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.271 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.272 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.272 16:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 nvme0n1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:31.530 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.530 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.530 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.789 nvme0n1 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.789 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:31.789 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.789 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 nvme0n1 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.047 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.048 16:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.048 16:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.306 nvme0n1 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.306 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.563 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.563 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.564 
16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.564 nvme0n1 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.564 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.821 16:35:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.821 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.822 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.822 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.822 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.079 nvme0n1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:33.079 16:35:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.079 16:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.645 nvme0n1 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.645 
16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.645 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.904 nvme0n1 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.904 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.163 16:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.163 16:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:34.420 nvme0n1 00:24:34.420 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.420 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.420 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.421 
16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.421 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.987 nvme0n1 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDY4OGQ2YzA4Njg5ZDVjMDc4NjFkMTUzNWNkOTgzYzd2hg/o: 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzE1OGJiYjFjMzAzYWYzMjM4OTg0ODNiNDc5MjFjYzRmNzlkMjU2ZTk0MGFiYWVjZDNlNWViZjYxM2ZjODVjMp7UkQA=: 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.987 16:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.987 16:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.552 nvme0n1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.552 16:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.552 16:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.552 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.118 nvme0n1 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.118 16:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.118 16:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.118 16:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.683 nvme0n1 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.683 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.940 16:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjY1MDJkNzM4NzViMTUwYjI5MmQ3Y2M0MGE4NWU1YzExMDAwYWQ0NjUyNzRkMzc0BiAmLQ==: 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: ]] 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGY3NGQ3NjExOTMwOTI3OTdiMWIwYTc0MWM2YWI3YzP1Obo5: 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.940 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.941 16:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:37.505 nvme0n1 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmE4YzI3NDM2MjhjYTQwYjBhNDE0NTUxY2M5MGNlMjZkYzQ3ZjA2ZjFjZmU5ZDdlYWNiMjEwZmZlZDA0YWI2YhKpksk=: 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.505 
16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.505 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.070 nvme0n1 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.070 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:38.071 
16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.071 request: 00:24:38.071 { 00:24:38.071 "name": "nvme0", 00:24:38.071 "trtype": "tcp", 00:24:38.071 "traddr": "10.0.0.1", 00:24:38.071 "adrfam": "ipv4", 00:24:38.071 "trsvcid": "4420", 00:24:38.071 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:38.071 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:38.071 "prchk_reftag": false, 00:24:38.071 "prchk_guard": false, 00:24:38.071 "hdgst": false, 00:24:38.071 "ddgst": false, 00:24:38.071 "allow_unrecognized_csi": false, 00:24:38.071 "method": "bdev_nvme_attach_controller", 00:24:38.071 "req_id": 1 00:24:38.071 } 00:24:38.071 Got JSON-RPC error response 00:24:38.071 response: 00:24:38.071 { 00:24:38.071 "code": -5, 00:24:38.071 "message": "Input/output 
error" 00:24:38.071 } 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.071 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.329 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.329 request: 00:24:38.329 { 00:24:38.330 "name": "nvme0", 00:24:38.330 "trtype": "tcp", 00:24:38.330 "traddr": "10.0.0.1", 
00:24:38.330 "adrfam": "ipv4", 00:24:38.330 "trsvcid": "4420", 00:24:38.330 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:38.330 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:38.330 "prchk_reftag": false, 00:24:38.330 "prchk_guard": false, 00:24:38.330 "hdgst": false, 00:24:38.330 "ddgst": false, 00:24:38.330 "dhchap_key": "key2", 00:24:38.330 "allow_unrecognized_csi": false, 00:24:38.330 "method": "bdev_nvme_attach_controller", 00:24:38.330 "req_id": 1 00:24:38.330 } 00:24:38.330 Got JSON-RPC error response 00:24:38.330 response: 00:24:38.330 { 00:24:38.330 "code": -5, 00:24:38.330 "message": "Input/output error" 00:24:38.330 } 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.330 16:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.330 16:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:38.330 16:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.330 request: 00:24:38.330 { 00:24:38.330 "name": "nvme0", 00:24:38.330 "trtype": "tcp", 00:24:38.330 "traddr": "10.0.0.1", 00:24:38.330 "adrfam": "ipv4", 00:24:38.330 "trsvcid": "4420", 00:24:38.330 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:38.330 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:38.330 "prchk_reftag": false, 00:24:38.330 "prchk_guard": false, 00:24:38.330 "hdgst": false, 00:24:38.330 "ddgst": false, 00:24:38.330 "dhchap_key": "key1", 00:24:38.330 "dhchap_ctrlr_key": "ckey2", 00:24:38.330 "allow_unrecognized_csi": false, 00:24:38.330 "method": "bdev_nvme_attach_controller", 00:24:38.330 "req_id": 1 00:24:38.330 } 00:24:38.330 Got JSON-RPC error response 00:24:38.330 response: 00:24:38.330 { 00:24:38.330 "code": -5, 00:24:38.330 "message": "Input/output error" 00:24:38.330 } 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.330 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.588 nvme0n1 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.588 16:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.588 16:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.588 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.845 request: 00:24:38.845 { 00:24:38.845 "name": "nvme0", 00:24:38.845 "dhchap_key": "key1", 00:24:38.845 "dhchap_ctrlr_key": "ckey2", 00:24:38.845 "method": "bdev_nvme_set_keys", 00:24:38.845 "req_id": 1 00:24:38.845 } 00:24:38.845 Got JSON-RPC error response 00:24:38.845 response: 00:24:38.845 { 00:24:38.845 "code": -13, 00:24:38.845 "message": "Permission denied" 00:24:38.845 } 00:24:38.845 
16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:38.845 16:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:39.776 16:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:40.706 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.706 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:40.706 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.706 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.706 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDM1Y2M3YTkzNzljY2I4YTBiNDQxOWM5NGRhODMzMTg1OWNiOTBhMzNiMTQwNGIzLIHrIg==: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: ]] 00:24:40.964 16:36:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTlmYmM1MDFlNjMyNjA5ZjgyMDM1MDFlMDE0Njg3ZDQ1YjY0Y2M5ZGFjZmEwMzZiG5unHg==: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.964 nvme0n1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.964 16:36:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGI3NDI4OGI3MTc5MWM0MjMwMDBiYWUwMmEwYmYyZTizNc42: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2EwN2U5ZTNkMDhiNGVhYTllOWQ4YzM2NWZmNmZjMWSnEn3U: 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.964 
16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.964 request: 00:24:40.964 { 00:24:40.964 "name": "nvme0", 00:24:40.964 "dhchap_key": "key2", 00:24:40.964 "dhchap_ctrlr_key": "ckey1", 00:24:40.964 "method": "bdev_nvme_set_keys", 00:24:40.964 "req_id": 1 00:24:40.964 } 00:24:40.964 Got JSON-RPC error response 00:24:40.964 response: 00:24:40.964 { 00:24:40.964 "code": -13, 00:24:40.964 "message": "Permission denied" 00:24:40.964 } 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.964 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.221 16:36:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:41.221 16:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.152 rmmod nvme_tcp 00:24:42.152 rmmod nvme_fabrics 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2941953 ']' 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2941953 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2941953 ']' 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2941953 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941953 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941953' 00:24:42.152 killing process with pid 2941953 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2941953 00:24:42.152 16:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2941953 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.410 16:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:44.865 16:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:47.392 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:47.392 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:48.767 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:48.767 16:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.257 /tmp/spdk.key-null.yw4 /tmp/spdk.key-sha256.IoW /tmp/spdk.key-sha384.lB6 /tmp/spdk.key-sha512.I2L 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:48.767 16:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.299 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:51.299 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:24:51.299 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:24:51.557 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:24:51.557 00:24:51.557 real 0m53.181s 00:24:51.557 user 0m46.902s 00:24:51.557 sys 0m12.312s 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.557 ************************************ 00:24:51.557 END TEST nvmf_auth_host 00:24:51.557 ************************************ 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:24:51.557 16:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.557 ************************************ 00:24:51.557 START TEST nvmf_digest 00:24:51.557 ************************************ 00:24:51.557 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:51.816 * Looking for test storage... 00:24:51.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:51.816 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.817 --rc genhtml_branch_coverage=1 00:24:51.817 --rc genhtml_function_coverage=1 00:24:51.817 --rc genhtml_legend=1 00:24:51.817 --rc geninfo_all_blocks=1 00:24:51.817 --rc geninfo_unexecuted_blocks=1 00:24:51.817 00:24:51.817 ' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.817 --rc genhtml_branch_coverage=1 00:24:51.817 --rc genhtml_function_coverage=1 00:24:51.817 --rc genhtml_legend=1 00:24:51.817 --rc geninfo_all_blocks=1 00:24:51.817 --rc geninfo_unexecuted_blocks=1 00:24:51.817 00:24:51.817 ' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.817 --rc genhtml_branch_coverage=1 00:24:51.817 --rc genhtml_function_coverage=1 00:24:51.817 --rc genhtml_legend=1 00:24:51.817 --rc geninfo_all_blocks=1 00:24:51.817 --rc geninfo_unexecuted_blocks=1 00:24:51.817 00:24:51.817 ' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.817 --rc genhtml_branch_coverage=1 00:24:51.817 --rc genhtml_function_coverage=1 00:24:51.817 --rc genhtml_legend=1 00:24:51.817 --rc geninfo_all_blocks=1 00:24:51.817 --rc geninfo_unexecuted_blocks=1 00:24:51.817 00:24:51.817 ' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.817 16:36:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.817 16:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.536 16:36:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.536 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.536 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.536 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.536 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:24:58.536 00:24:58.536 --- 10.0.0.2 ping statistics --- 00:24:58.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.536 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:24:58.536 00:24:58.536 --- 10.0.0.1 ping statistics --- 00:24:58.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.536 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:58.536 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 ************************************ 00:24:58.537 START TEST nvmf_digest_clean 00:24:58.537 ************************************ 00:24:58.537 
16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2956220 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2956220 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2956220 ']' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.537 16:36:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 [2024-11-04 16:36:24.545721] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:24:58.537 [2024-11-04 16:36:24.545771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.537 [2024-11-04 16:36:24.607832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.537 [2024-11-04 16:36:24.649240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.537 [2024-11-04 16:36:24.649272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.537 [2024-11-04 16:36:24.649279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.537 [2024-11-04 16:36:24.649285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.537 [2024-11-04 16:36:24.649291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.537 [2024-11-04 16:36:24.649855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 null0 00:24:58.537 [2024-11-04 16:36:24.817080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.537 [2024-11-04 16:36:24.841292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2956242 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2956242 /var/tmp/bperf.sock 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2956242 ']' 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.537 16:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.537 [2024-11-04 16:36:24.894241] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:24:58.537 [2024-11-04 16:36:24.894286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956242 ] 00:24:58.537 [2024-11-04 16:36:24.957290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.537 [2024-11-04 16:36:25.000352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:24:58.537 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.103 nvme0n1 00:24:59.103 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.103 16:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.103 Running I/O for 2 seconds... 00:25:00.969 23789.00 IOPS, 92.93 MiB/s [2024-11-04T15:36:27.793Z] 24256.00 IOPS, 94.75 MiB/s 00:25:00.969 Latency(us) 00:25:00.969 [2024-11-04T15:36:27.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.969 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:00.969 nvme0n1 : 2.00 24277.42 94.83 0.00 0.00 5268.48 2621.44 16227.96 00:25:00.969 [2024-11-04T15:36:27.793Z] =================================================================================================================== 00:25:00.969 [2024-11-04T15:36:27.793Z] Total : 24277.42 94.83 0.00 0.00 5268.48 2621.44 16227.96 00:25:00.969 { 00:25:00.969 "results": [ 00:25:00.969 { 00:25:00.969 "job": "nvme0n1", 00:25:00.969 "core_mask": "0x2", 00:25:00.969 "workload": "randread", 00:25:00.969 "status": "finished", 00:25:00.969 "queue_depth": 128, 00:25:00.969 "io_size": 4096, 00:25:00.969 "runtime": 2.003508, 00:25:00.969 "iops": 24277.4174098631, 00:25:00.969 "mibps": 94.83366175727774, 00:25:00.969 "io_failed": 0, 00:25:00.969 "io_timeout": 0, 00:25:00.969 "avg_latency_us": 5268.475669172933, 00:25:00.969 "min_latency_us": 2621.44, 00:25:00.969 "max_latency_us": 16227.961904761905 00:25:00.969 } 00:25:00.969 ], 00:25:00.969 "core_count": 1 00:25:00.969 } 
00:25:00.969 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:00.969 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:00.969 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:00.969 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:00.969 | select(.opcode=="crc32c") 00:25:00.969 | "\(.module_name) \(.executed)"' 00:25:00.969 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2956242 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2956242 ']' 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2956242 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.227 16:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2956242 00:25:01.227 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:01.227 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:01.227 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956242' 00:25:01.227 killing process with pid 2956242 00:25:01.227 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2956242 00:25:01.227 Received shutdown signal, test time was about 2.000000 seconds 00:25:01.227 00:25:01.227 Latency(us) 00:25:01.227 [2024-11-04T15:36:28.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.227 [2024-11-04T15:36:28.051Z] =================================================================================================================== 00:25:01.227 [2024-11-04T15:36:28.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.227 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2956242 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 
00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2956717 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2956717 /var/tmp/bperf.sock 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2956717 ']' 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:01.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.485 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:01.485 [2024-11-04 16:36:28.236117] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:01.485 [2024-11-04 16:36:28.236169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956717 ] 00:25:01.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:01.485 Zero copy mechanism will not be used. 
00:25:01.485 [2024-11-04 16:36:28.299850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.742 [2024-11-04 16:36:28.337190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.742 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.742 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:01.742 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:01.742 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:01.742 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.999 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.999 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.257 nvme0n1 00:25:02.257 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:02.257 16:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:02.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:02.257 Zero copy mechanism will not be used. 00:25:02.257 Running I/O for 2 seconds... 
00:25:04.562 5760.00 IOPS, 720.00 MiB/s [2024-11-04T15:36:31.386Z] 5687.00 IOPS, 710.88 MiB/s 00:25:04.562 Latency(us) 00:25:04.562 [2024-11-04T15:36:31.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.562 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:04.562 nvme0n1 : 2.00 5685.80 710.73 0.00 0.00 2811.39 807.50 12795.12 00:25:04.562 [2024-11-04T15:36:31.386Z] =================================================================================================================== 00:25:04.562 [2024-11-04T15:36:31.386Z] Total : 5685.80 710.73 0.00 0.00 2811.39 807.50 12795.12 00:25:04.562 { 00:25:04.562 "results": [ 00:25:04.562 { 00:25:04.562 "job": "nvme0n1", 00:25:04.562 "core_mask": "0x2", 00:25:04.562 "workload": "randread", 00:25:04.562 "status": "finished", 00:25:04.562 "queue_depth": 16, 00:25:04.562 "io_size": 131072, 00:25:04.562 "runtime": 2.003236, 00:25:04.562 "iops": 5685.800374993261, 00:25:04.562 "mibps": 710.7250468741577, 00:25:04.562 "io_failed": 0, 00:25:04.562 "io_timeout": 0, 00:25:04.562 "avg_latency_us": 2811.3876118566827, 00:25:04.562 "min_latency_us": 807.4971428571429, 00:25:04.562 "max_latency_us": 12795.12380952381 00:25:04.562 } 00:25:04.562 ], 00:25:04.562 "core_count": 1 00:25:04.562 } 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:04.562 | select(.opcode=="crc32c") 00:25:04.562 | "\(.module_name) \(.executed)"' 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2956717 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2956717 ']' 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2956717 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956717 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956717' 00:25:04.562 killing process with pid 2956717 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2956717 00:25:04.562 Received shutdown signal, test time was about 2.000000 seconds 
00:25:04.562 00:25:04.562 Latency(us) 00:25:04.562 [2024-11-04T15:36:31.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.562 [2024-11-04T15:36:31.386Z] =================================================================================================================== 00:25:04.562 [2024-11-04T15:36:31.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.562 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2956717 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2957367 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2957367 /var/tmp/bperf.sock 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2957367 ']' 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.820 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:04.820 [2024-11-04 16:36:31.540927] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:04.820 [2024-11-04 16:36:31.540978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957367 ] 00:25:04.820 [2024-11-04 16:36:31.604860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.077 [2024-11-04 16:36:31.646604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.077 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.077 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:05.077 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:05.077 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:05.077 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:05.333 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.333 16:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.589 nvme0n1 00:25:05.589 16:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.589 16:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.589 Running I/O for 2 seconds... 
00:25:07.894 28857.00 IOPS, 112.72 MiB/s [2024-11-04T15:36:34.718Z] 28940.50 IOPS, 113.05 MiB/s 00:25:07.894 Latency(us) 00:25:07.894 [2024-11-04T15:36:34.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.894 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:07.894 nvme0n1 : 2.00 28944.82 113.07 0.00 0.00 4416.54 1872.46 8800.55 00:25:07.894 [2024-11-04T15:36:34.718Z] =================================================================================================================== 00:25:07.894 [2024-11-04T15:36:34.718Z] Total : 28944.82 113.07 0.00 0.00 4416.54 1872.46 8800.55 00:25:07.894 { 00:25:07.894 "results": [ 00:25:07.894 { 00:25:07.894 "job": "nvme0n1", 00:25:07.894 "core_mask": "0x2", 00:25:07.894 "workload": "randwrite", 00:25:07.894 "status": "finished", 00:25:07.894 "queue_depth": 128, 00:25:07.894 "io_size": 4096, 00:25:07.894 "runtime": 2.004124, 00:25:07.894 "iops": 28944.81578984135, 00:25:07.894 "mibps": 113.06568667906777, 00:25:07.894 "io_failed": 0, 00:25:07.894 "io_timeout": 0, 00:25:07.894 "avg_latency_us": 4416.536312444128, 00:25:07.894 "min_latency_us": 1872.4571428571428, 00:25:07.894 "max_latency_us": 8800.548571428571 00:25:07.894 } 00:25:07.894 ], 00:25:07.894 "core_count": 1 00:25:07.894 } 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.894 | select(.opcode=="crc32c") 00:25:07.894 | "\(.module_name) \(.executed)"' 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.894 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2957367 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2957367 ']' 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2957367 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957367 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957367' 00:25:07.895 killing process with pid 2957367 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2957367 00:25:07.895 Received shutdown signal, test time was about 2.000000 seconds 
00:25:07.895 00:25:07.895 Latency(us) 00:25:07.895 [2024-11-04T15:36:34.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.895 [2024-11-04T15:36:34.719Z] =================================================================================================================== 00:25:07.895 [2024-11-04T15:36:34.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.895 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2957367 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2957879 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2957879 /var/tmp/bperf.sock 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2957879 ']' 00:25:08.153 16:36:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.153 [2024-11-04 16:36:34.802126] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:08.153 [2024-11-04 16:36:34.802173] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957879 ] 00:25:08.153 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.153 Zero copy mechanism will not be used. 
00:25:08.153 [2024-11-04 16:36:34.865357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.153 [2024-11-04 16:36:34.901973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:08.153 16:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.411 16:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.411 16:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.977 nvme0n1 00:25:08.977 16:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:08.977 16:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:08.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.977 Zero copy mechanism will not be used. 00:25:08.977 Running I/O for 2 seconds... 
00:25:11.286 6305.00 IOPS, 788.12 MiB/s [2024-11-04T15:36:38.110Z] 6471.00 IOPS, 808.88 MiB/s 00:25:11.286 Latency(us) 00:25:11.286 [2024-11-04T15:36:38.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.286 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:11.286 nvme0n1 : 2.00 6469.17 808.65 0.00 0.00 2469.29 1287.31 4618.73 00:25:11.286 [2024-11-04T15:36:38.110Z] =================================================================================================================== 00:25:11.286 [2024-11-04T15:36:38.110Z] Total : 6469.17 808.65 0.00 0.00 2469.29 1287.31 4618.73 00:25:11.286 { 00:25:11.286 "results": [ 00:25:11.286 { 00:25:11.286 "job": "nvme0n1", 00:25:11.286 "core_mask": "0x2", 00:25:11.286 "workload": "randwrite", 00:25:11.286 "status": "finished", 00:25:11.286 "queue_depth": 16, 00:25:11.286 "io_size": 131072, 00:25:11.286 "runtime": 2.003039, 00:25:11.286 "iops": 6469.170096039069, 00:25:11.286 "mibps": 808.6462620048836, 00:25:11.286 "io_failed": 0, 00:25:11.286 "io_timeout": 0, 00:25:11.286 "avg_latency_us": 2469.288707986976, 00:25:11.286 "min_latency_us": 1287.3142857142857, 00:25:11.286 "max_latency_us": 4618.727619047619 00:25:11.286 } 00:25:11.286 ], 00:25:11.286 "core_count": 1 00:25:11.286 } 00:25:11.286 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:11.287 | select(.opcode=="crc32c") 00:25:11.287 | "\(.module_name) \(.executed)"' 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2957879 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2957879 ']' 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2957879 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957879 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957879' 00:25:11.287 killing process with pid 2957879 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2957879 00:25:11.287 Received shutdown signal, test time was about 2.000000 seconds 
00:25:11.287 00:25:11.287 Latency(us) 00:25:11.287 [2024-11-04T15:36:38.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.287 [2024-11-04T15:36:38.111Z] =================================================================================================================== 00:25:11.287 [2024-11-04T15:36:38.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.287 16:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2957879 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2956220 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2956220 ']' 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2956220 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956220 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956220' 00:25:11.545 killing process with pid 2956220 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2956220 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2956220 00:25:11.545 00:25:11.545 
real 0m13.851s 00:25:11.545 user 0m26.454s 00:25:11.545 sys 0m4.437s 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.545 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:11.545 ************************************ 00:25:11.545 END TEST nvmf_digest_clean 00:25:11.545 ************************************ 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:11.803 ************************************ 00:25:11.803 START TEST nvmf_digest_error 00:25:11.803 ************************************ 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2958428 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2958428 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 
2958428 ']' 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.803 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.803 [2024-11-04 16:36:38.470102] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:11.803 [2024-11-04 16:36:38.470147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.803 [2024-11-04 16:36:38.536746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.804 [2024-11-04 16:36:38.576770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.804 [2024-11-04 16:36:38.576806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:11.804 [2024-11-04 16:36:38.576813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.804 [2024-11-04 16:36:38.576819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.804 [2024-11-04 16:36:38.576825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.804 [2024-11-04 16:36:38.577388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.804 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.804 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:11.804 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.804 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.804 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 [2024-11-04 16:36:38.653862] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.062 16:36:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 null0 00:25:12.062 [2024-11-04 16:36:38.745200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.062 [2024-11-04 16:36:38.769417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2958613 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2958613 /var/tmp/bperf.sock 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2958613 ']' 
00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.062 16:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.062 [2024-11-04 16:36:38.808346] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:12.062 [2024-11-04 16:36:38.808385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958613 ] 00:25:12.062 [2024-11-04 16:36:38.871591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.320 [2024-11-04 16:36:38.915287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.320 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.320 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:12.320 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:12.320 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:12.578 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:12.578 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.578 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.579 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:12.579 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:12.836 nvme0n1 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:12.836 16:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.094 Running I/O for 2 seconds... 00:25:13.094 [2024-11-04 16:36:39.741338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.741372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.741383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.752291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.752316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.752326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.760329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.760353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.760362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.772173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.772197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3119 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.772206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.780584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.780614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.780623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.791546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.791569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.791578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.803644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.803666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.803674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.811880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.811900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.811908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.823251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.823273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.823281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.834100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.834122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.834130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.847310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.847332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.847341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.855530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.855551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.855564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.866688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.866710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.866718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.878168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.878190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.878216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.888287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.888310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.888319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.901169] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.901192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.901201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.094 [2024-11-04 16:36:39.911829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.094 [2024-11-04 16:36:39.911851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.094 [2024-11-04 16:36:39.911860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.922891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.922915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.922924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.933992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.934015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.934023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.942488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.942508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.952502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.952527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.952536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.961898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.961920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.961928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.971248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.971269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.971277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.980646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.980667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.980675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.990936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.990959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.990967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:39.998797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:39.998819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:39.998827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:40.009951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:40.009974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:40.009983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:40.019494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:40.019516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:40.019524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:40.029051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:40.029071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:40.029080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:40.041921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:40.041944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.352 [2024-11-04 16:36:40.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.352 [2024-11-04 16:36:40.054112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.352 [2024-11-04 16:36:40.054133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:13.352 [2024-11-04 16:36:40.054141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.066558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.066580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.066589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.077449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.077486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.077496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.085994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.086015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.086024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.097635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.097657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:19046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.097665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.108313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.108334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.108342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.116433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.116455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.116463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.126312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.126333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.126349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.137262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.137284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.137292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.145401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.145423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.145431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.155951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.155973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.155981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.165747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.353 [2024-11-04 16:36:40.165767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.165775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.353 [2024-11-04 16:36:40.173707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 
00:25:13.353 [2024-11-04 16:36:40.173727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.353 [2024-11-04 16:36:40.173735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.185714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.185734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.185742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.198612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.198633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.198641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.206630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.206650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.206657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.218045] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.218067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.218075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.228159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.228180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.228188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.611 [2024-11-04 16:36:40.237865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.611 [2024-11-04 16:36:40.237886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.611 [2024-11-04 16:36:40.237894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.245878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.245898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.245906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.256665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.256694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.266279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.266299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.266308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.275330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.275351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.275359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.286044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.286064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.286073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.294009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.294030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.294041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.305117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.305138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.305146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.317547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.317568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.317577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.328696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.328718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 
16:36:40.328726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.336974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.336996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.337004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.348727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.348748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.348756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.357045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.357066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.357075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.368500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.368521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9027 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.368530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.379115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.379136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.379144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.387739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.387764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.387772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.398821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.398841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.398849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.407652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.407673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.407681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.420773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.420795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.420803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.612 [2024-11-04 16:36:40.429119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.612 [2024-11-04 16:36:40.429139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.612 [2024-11-04 16:36:40.429148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.440540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.440561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.440569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.452666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 
00:25:13.871 [2024-11-04 16:36:40.452687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.452695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.465186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.465207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.477964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.477985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.477993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.490263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.490284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.490292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.498460] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.498480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.498489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.510134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.510160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.510168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.521664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.521686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.521694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.530921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.530942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.530951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.542722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.542744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.542752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.554055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.554075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.554083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.562892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.562913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.562922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.572270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.572290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.572302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.581538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.581558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.581566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.591859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.591880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.591889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.601671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.601691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.601699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.611107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.611128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.611136] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.619699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.619721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.619730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.629337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.629357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.871 [2024-11-04 16:36:40.629365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.871 [2024-11-04 16:36:40.639207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.871 [2024-11-04 16:36:40.639227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.639235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.872 [2024-11-04 16:36:40.647749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.872 [2024-11-04 16:36:40.647771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2394 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.647779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.872 [2024-11-04 16:36:40.658279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.872 [2024-11-04 16:36:40.658303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.658311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.872 [2024-11-04 16:36:40.667632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.872 [2024-11-04 16:36:40.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.667662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.872 [2024-11-04 16:36:40.676515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.872 [2024-11-04 16:36:40.676536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.676544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.872 [2024-11-04 16:36:40.687061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:13.872 [2024-11-04 16:36:40.687081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:117 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.872 [2024-11-04 16:36:40.687089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.695330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.695351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.695359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.705649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.705669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.705677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.714359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.714380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.714388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.725176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.725198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.725206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 24848.00 IOPS, 97.06 MiB/s [2024-11-04T15:36:40.955Z] [2024-11-04 16:36:40.737197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.737219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.737230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.746330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.746351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.746359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.757611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.757634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.757643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.768962] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.768984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.768992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.780575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.780596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.780612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.788879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.788901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.788909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.799938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.799960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.799968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.808755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.808775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.808783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.817425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.817454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.826918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.826943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.826952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.836439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.836460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.836468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.845680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.845701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.845709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.853977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.853998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.854006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.864595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.864621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.864629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.874279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.874301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.874309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.884427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.884447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.892683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.892705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.892714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.904956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.904977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.904985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.915782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.915803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13524 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.915811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.925924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.925944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.925952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.934593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.934617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.944612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.131 [2024-11-04 16:36:40.944633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.131 [2024-11-04 16:36:40.944641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.131 [2024-11-04 16:36:40.954181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.132 [2024-11-04 16:36:40.954202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:15658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.132 [2024-11-04 16:36:40.954211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:40.962820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:40.962840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:40.962848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:40.972387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:40.972407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:40.972416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:40.981635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:40.981656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:40.981664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:40.990637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:40.990658] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:40.990670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:41.000297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:41.000317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:41.000325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:41.011780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:41.011801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:41.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.390 [2024-11-04 16:36:41.020197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.390 [2024-11-04 16:36:41.020219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.390 [2024-11-04 16:36:41.020227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.391 [2024-11-04 16:36:41.031599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xac6370) 00:25:14.391 [2024-11-04 16:36:41.031625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.391 [2024-11-04 16:36:41.031633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.391 [2024-11-04 16:36:41.040736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.391 [2024-11-04 16:36:41.040757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.391 [2024-11-04 16:36:41.040765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.391 [2024-11-04 16:36:41.049497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.391 [2024-11-04 16:36:41.049518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.391 [2024-11-04 16:36:41.049527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.391 [2024-11-04 16:36:41.058589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370) 00:25:14.391 [2024-11-04 16:36:41.058614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.391 [2024-11-04 16:36:41.058622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.391 [2024-11-04 16:36:41.068203] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xac6370)
00:25:14.391 [2024-11-04 16:36:41.068223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.391 [2024-11-04 16:36:41.068231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" records omitted: the same three-line pattern recurs from 16:36:41.077 through 16:36:41.735 with varying cid/lba values ...]
00:25:15.168 25475.00 IOPS, 99.51 MiB/s [2024-11-04T15:36:41.992Z]
00:25:15.168
00:25:15.168 Latency(us)
00:25:15.168 [2024-11-04T15:36:41.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:15.168 nvme0n1 : 2.00 25497.01 99.60 0.00 0.00 5014.75 2309.36 18225.25
00:25:15.168 [2024-11-04T15:36:41.992Z]
===================================================================================================================
00:25:15.168 [2024-11-04T15:36:41.992Z] Total : 25497.01 99.60 0.00 0.00 5014.75 2309.36 18225.25
00:25:15.168 {
00:25:15.168 "results": [
00:25:15.168 {
00:25:15.168 "job": "nvme0n1",
00:25:15.168 "core_mask": "0x2",
00:25:15.168 "workload": "randread",
00:25:15.168 "status": "finished",
00:25:15.168 "queue_depth": 128,
00:25:15.168 "io_size": 4096,
00:25:15.168 "runtime": 2.004431,
00:25:15.168 "iops": 25497.01137130687,
00:25:15.168 "mibps": 99.59770066916747,
00:25:15.168 "io_failed": 0,
00:25:15.168 "io_timeout": 0,
00:25:15.168 "avg_latency_us": 5014.747065531047,
00:25:15.168 "min_latency_us": 2309.3638095238093,
00:25:15.168 "max_latency_us": 18225.249523809525
00:25:15.168 }
00:25:15.168 ],
00:25:15.168 "core_count": 1
00:25:15.168 }
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:15.168 | .driver_specific
00:25:15.168 | .nvme_error
00:25:15.168 | .status_code
00:25:15.168 | .command_transient_transport_error'
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2958613
00:25:15.168 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2958613 ']'
00:25:15.169 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2958613
00:25:15.169 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:15.169 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:15.169 16:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958613
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958613'
00:25:15.427 killing process with pid 2958613
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2958613
00:25:15.427 Received shutdown signal, test time was about 2.000000 seconds
00:25:15.427
00:25:15.427 Latency(us)
00:25:15.427 [2024-11-04T15:36:42.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.427 [2024-11-04T15:36:42.251Z] ===================================================================================================================
00:25:15.427 [2024-11-04T15:36:42.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2958613
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959093 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959093 /var/tmp/bperf.sock 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2959093 ']' 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.427 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.427 [2024-11-04 16:36:42.189672] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:25:15.427 [2024-11-04 16:36:42.189718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959093 ] 00:25:15.427 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.427 Zero copy mechanism will not be used. 00:25:15.427 [2024-11-04 16:36:42.251290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.685 [2024-11-04 16:36:42.293917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.685 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.685 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:15.685 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.685 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.943 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:15.943 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.943 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.943 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.943 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.944 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.202 nvme0n1 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:16.202 16:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.202 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.202 Zero copy mechanism will not be used. 00:25:16.202 Running I/O for 2 seconds... 
00:25:16.202 [2024-11-04 16:36:42.940813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.202 [2024-11-04 16:36:42.940851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.202 [2024-11-04 16:36:42.940863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.202 [2024-11-04 16:36:42.946759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.946786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.946796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.952638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.952662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.952671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.958480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.958503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.958512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.965550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.965573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.973011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.973035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.973044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.981134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.981159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.981169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.988446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.988471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.988480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:42.995998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:42.996022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:42.996036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:43.002654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:43.002678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:43.002687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:43.010367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:43.010390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:43.010399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:43.017128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:43.017150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.203 [2024-11-04 16:36:43.017159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.203 [2024-11-04 16:36:43.022899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.203 [2024-11-04 16:36:43.022923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.203 [2024-11-04 16:36:43.022931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.029928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.029951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.029960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.038414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.038437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.038446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.045717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.045739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.045747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.051206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.051228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.051237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.056233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.056258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.056267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.061022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.061045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.061054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.065776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.065798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.065807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.070826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.070857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.076207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.076229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.076237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.082254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.082276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.082284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.088159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.088181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.088189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.093885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.093908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.093916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.099707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.099729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.099737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.105302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.105324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.105332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.111588] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.111615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.117274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.117295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.117303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.122965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.122986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.122994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.128814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.128835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.128844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.134767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.134788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.134797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.140491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.140512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.140520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.146558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.146580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.463 [2024-11-04 16:36:43.146588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.463 [2024-11-04 16:36:43.152620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.463 [2024-11-04 16:36:43.152642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.152656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.158476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.158497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.158505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.164269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.164290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.164298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.170044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.170065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.170073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.175785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.175806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.175814] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.181455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.181477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.181485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.187118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.187140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.187148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.192583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.192610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.464 [2024-11-04 16:36:43.192619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.464 [2024-11-04 16:36:43.198251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.464 [2024-11-04 16:36:43.198274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.198283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.203549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.203576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.203586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.208992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.209013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.209022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.214133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.214156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.214164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.219543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.219572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.225069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.225092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.225101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.230372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.230395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.230403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.235782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.235804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.240989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.241012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.241020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.246471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.246493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.246501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.252111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.252132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.252141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.258180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.258202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.258210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.264634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.264656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.264665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.270692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.270714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.270722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.274446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.274467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.274476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.278624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.278647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.278655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.464 [2024-11-04 16:36:43.284041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.464 [2024-11-04 16:36:43.284064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.464 [2024-11-04 16:36:43.284072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.289570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.289592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.289605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.295066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.295089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.295100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.300427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.300449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.300457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.305783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.305804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.305812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.311425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.311446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.311454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.317150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.317173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.317181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.322537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.322568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.328068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.328090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.328098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.333575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.333597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.333610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.338901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.338922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.338931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.344512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.344533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.344542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.349990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.350012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.350020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.355629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.355650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.355658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.361104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.361126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.361134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.366491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.366513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.366521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.372536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.372558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.372565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.378622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.378643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.378651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.384495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.724 [2024-11-04 16:36:43.384517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.724 [2024-11-04 16:36:43.384525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.724 [2024-11-04 16:36:43.390538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.390561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.390573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.396683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.396705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.396713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.402703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.402725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.402733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.408590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.408617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.408625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.414399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.414421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.414429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.420132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.420153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.420161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.426004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.426026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.426035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.431589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.431615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.431624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.437275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.437297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.437305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.443007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.443033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.443041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.448860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.448881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.448891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.454314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.454338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.454347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.460417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.460438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.460446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.466442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.466464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.466472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.472349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.472370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.472379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.478085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.478106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.478113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.483736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.483758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.483767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.489488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.489509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.489517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.495114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.495136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.495145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.500851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.500874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.500882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.506597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.506625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.506634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.511999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.512021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.512029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.517440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.517462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.523075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.523097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.523105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.725 [2024-11-04 16:36:43.528489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.725 [2024-11-04 16:36:43.528511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.725 [2024-11-04 16:36:43.528519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.726 [2024-11-04 16:36:43.534110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.726 [2024-11-04 16:36:43.534132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.726 [2024-11-04 16:36:43.534140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.726 [2024-11-04 16:36:43.539576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.726 [2024-11-04 16:36:43.539598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.726 [2024-11-04 16:36:43.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.726 [2024-11-04 16:36:43.545254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.726 [2024-11-04 16:36:43.545277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.726 [2024-11-04 16:36:43.545286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.985 [2024-11-04 16:36:43.551023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.985 [2024-11-04 16:36:43.551047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.985 [2024-11-04 16:36:43.551055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.556415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.556445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.561878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.561899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.561907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.567486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.567508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.567516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.573404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.573426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.573434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.578819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.578840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.578849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.584416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.584438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.584447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.589042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.589068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.589076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.592036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.592058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.592066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.596721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.596742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.596750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.602054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.602076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.602084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.607458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.607479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.607489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.613132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.613154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.613163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.619371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.619393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.619402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:16.986 [2024-11-04 16:36:43.625856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:16.986 [2024-11-04 16:36:43.625879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.986 [2024-11-04 16:36:43.625888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.631758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.631780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.631791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.637538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.637559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.637567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.643393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.643415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.643425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.648940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.648962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 
16:36:43.648970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.654540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.654562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.654570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.660190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.660212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.660220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.665163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.665185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.665194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.670515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.670537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.670545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.676093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.676123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.682108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.986 [2024-11-04 16:36:43.682134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-11-04 16:36:43.682143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.986 [2024-11-04 16:36:43.687592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.687622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.687631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.693254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.693276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.693284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.696843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.696865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.696874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.700950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.700973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.700983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.705929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.705958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.711700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.711722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.711730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.717217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.717239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.717248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.722613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.722634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.722643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.727940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.727962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.727970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.733668] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.733690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.733698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.739377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.739398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.739406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.744834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.744856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.744865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.750582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.750613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.750622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.756250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.761689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.761711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.761719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.767350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.767372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.767380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.772940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.772962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.772973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.778717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.778738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.778747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.784784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.784806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.784814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.790761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.790783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.790791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.796666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.796688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 
16:36:43.796697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.802586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.802613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.802622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.987 [2024-11-04 16:36:43.807939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:16.987 [2024-11-04 16:36:43.807962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.987 [2024-11-04 16:36:43.807971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.247 [2024-11-04 16:36:43.813275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.247 [2024-11-04 16:36:43.813297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-11-04 16:36:43.813306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.247 [2024-11-04 16:36:43.819019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.247 [2024-11-04 16:36:43.819042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-11-04 16:36:43.819050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.247 [2024-11-04 16:36:43.824101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.824127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.824135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.831692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.831715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.831723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.838195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.838218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.838228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.844338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.844361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.844369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.850785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.850817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.857552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.857575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.857583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.864199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.864222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.864232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.872036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.872059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.872067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.879061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.879085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.879099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.886319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.886343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.886351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.893645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.893668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.893677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.900729] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.900752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.900761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.908663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.908687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.908696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.916117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.916139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.916147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.923368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.923391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.923400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 5300.00 IOPS, 662.50 MiB/s [2024-11-04T15:36:44.072Z] [2024-11-04 16:36:43.932120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.932140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.932148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.938863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.938885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.938894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.942145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.942172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.942180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.948008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.948029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.948038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.953413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.953439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.953448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.959020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.959043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.959051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.964703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.964726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.964735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.972513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.972535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.972543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.978597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.978627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.248 [2024-11-04 16:36:43.978636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.248 [2024-11-04 16:36:43.983700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.248 [2024-11-04 16:36:43.983721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:43.983730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:43.989016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:43.989037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:43.989046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:43.994295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:43.994318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:43.994326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:43.999742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:43.999765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:43.999773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.005327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.005350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.005359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.010881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.010903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.010913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.016216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 
16:36:44.016238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.016247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.022008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.022030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.022039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.027634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.027655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.027664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.033064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.033086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.033095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.038517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.038538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.038551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.043843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.043866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.043875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.049117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.049138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.049148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.054321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.054342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.054351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.059250] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.059272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.059282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.065340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.065366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.065375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.249 [2024-11-04 16:36:44.070689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.249 [2024-11-04 16:36:44.070713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-11-04 16:36:44.070723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-11-04 16:36:44.076379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.508 [2024-11-04 16:36:44.076402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-11-04 16:36:44.076412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:17.508 [2024-11-04 16:36:44.081916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.508 [2024-11-04 16:36:44.081940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-11-04 16:36:44.081949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.508 [2024-11-04 16:36:44.087108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.508 [2024-11-04 16:36:44.087136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-11-04 16:36:44.087145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.508 [2024-11-04 16:36:44.092992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.508 [2024-11-04 16:36:44.093016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-11-04 16:36:44.093025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-11-04 16:36:44.098799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.098823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.104392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.104415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.104424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.110011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.110033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.110043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.115593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.115623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.115632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.121114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.121137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.121147] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.126569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.126592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.126606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.132049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.132072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.132085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.137592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.137620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.137629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.142478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.142501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.142510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.145534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.145557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.145566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.151082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.151103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.156293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.156315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.156324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.161657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.161678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.161687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.166927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.166948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.166957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.172179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.172201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.172210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.177488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.177513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.177522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.182883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 
16:36:44.182906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.182915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.188235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.188257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.188266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.193562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.193584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.193593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.198916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.198938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.198947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.204328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.204352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.204361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.209834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.209860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.209869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.215371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.215394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.215403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.220947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.220971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.509 [2024-11-04 16:36:44.220980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.509 [2024-11-04 16:36:44.226049] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.509 [2024-11-04 16:36:44.226072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.226081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.232170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.232194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.232203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.238268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.238292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.238302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.245995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.246020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.246031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.253551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.253576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.253587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.261337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.261364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.261374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.269296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.269322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.269334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.277696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.277721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.277732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.285096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.285123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.285138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.292998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.293024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.293035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.301175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.301202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.301214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.309425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.309451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 
16:36:44.309462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.317542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.317567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.317579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.510 [2024-11-04 16:36:44.325547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.510 [2024-11-04 16:36:44.325573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.510 [2024-11-04 16:36:44.325584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.770 [2024-11-04 16:36:44.333518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.770 [2024-11-04 16:36:44.333544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.770 [2024-11-04 16:36:44.333554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.770 [2024-11-04 16:36:44.341396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:17.770 [2024-11-04 16:36:44.341422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.341432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.349516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.349541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.349552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.357533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.357562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.357572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.365483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.365507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.365517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.372869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.372891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.372902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.380069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.380092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.380102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.388578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.388608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.388619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.396055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.396079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.404305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.404328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.404337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.412956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.412981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.412990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.420373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.420396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.420405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.425899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.425923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.425933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.431913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.431936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.437586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.437614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.437623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.443224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.443247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.443256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.448819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.448842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.448851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.454420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.454443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.770 [2024-11-04 16:36:44.454453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.770 [2024-11-04 16:36:44.460157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.770 [2024-11-04 16:36:44.460180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.460189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.465792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.465814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.465823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.471367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.471393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.471402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.476931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.476952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.476961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.482522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.482544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.482553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.488025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.488047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.488056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.493521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.493543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.493552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.499084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.499106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.499115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.504618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.504639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.504648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.510003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.510024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.510032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.515321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.515342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.515350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.520857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.520879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.526298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.526322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.526330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.531751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.531773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.531782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.537132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.537154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.537163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.542400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.542422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.542430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.547744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.547767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.547776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.553054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.553076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.553084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.558536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.558558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.558567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.564063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.564085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.564097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.569622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.569644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.569653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.574868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.574891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.574900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.580184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.580207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.580216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.585568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.585591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.585608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.771 [2024-11-04 16:36:44.590976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:17.771 [2024-11-04 16:36:44.590999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.771 [2024-11-04 16:36:44.591009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.596489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.596511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.596520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.602062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.602084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.602093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.607493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.607515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.607525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.612802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.612828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.612837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.618119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.618141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.618150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.624578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.624604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.031 [2024-11-04 16:36:44.624613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.031 [2024-11-04 16:36:44.632001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.031 [2024-11-04 16:36:44.632023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.632031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.637305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.637327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.637334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.642363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.642384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.642392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.645381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.645403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.645412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.651139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.651161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.651171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.657240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.657263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.657272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.664295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.664320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.664329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.671686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.671710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.671719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.678940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.678963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.678972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.685213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.685235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.685244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.690970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.690991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.691000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.696894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.696915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.696923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.702585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.702621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.708206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.708228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.708237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.713932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.713953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.713965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.719550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.719573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.719582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.725319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.725341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.725349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.731265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.731286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.731294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.737071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.737093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.737101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.742640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.742661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.742669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.748343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.748366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.748374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.754178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.754199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.754207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.759482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.759504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.759515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.765115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.765137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.032 [2024-11-04 16:36:44.765145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.032 [2024-11-04 16:36:44.770656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.032 [2024-11-04 16:36:44.770678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.770688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.776478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.776500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.776510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.782268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.782290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.782298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.787912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.787935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.787943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.793473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.793496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.793506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.799031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.799054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.799063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.804610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.804632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.804640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.810169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.810191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.810203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.815579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.815606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.815615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:18.033 [2024-11-04 16:36:44.821010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570)
00:25:18.033 [2024-11-04 16:36:44.821033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.033 [2024-11-04 16:36:44.821042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.826809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.826832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 16:36:44.826840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.832363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.832385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 16:36:44.832394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.837871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.837893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 16:36:44.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.843310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.843332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 
16:36:44.843340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.848710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 16:36:44.848740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.033 [2024-11-04 16:36:44.854365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.033 [2024-11-04 16:36:44.854387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.033 [2024-11-04 16:36:44.854395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.860059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.860085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.860093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.865839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.865861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.865870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.871361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.871383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.871392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.876877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.876899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.876908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.882588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.882617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.882626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.888192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.888214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.292 [2024-11-04 16:36:44.888223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.292 [2024-11-04 16:36:44.893764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.292 [2024-11-04 16:36:44.893787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.893796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.899372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.899395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.899404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.904935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.904958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.904967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.910999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.911022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.911031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.916861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.916883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.916891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.922932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.922954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.922965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.293 [2024-11-04 16:36:44.929346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.929368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.929377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.293 5283.00 IOPS, 660.38 MiB/s [2024-11-04T15:36:45.117Z] 
[2024-11-04 16:36:44.936520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e5570) 00:25:18.293 [2024-11-04 16:36:44.936542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.293 [2024-11-04 16:36:44.936551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.293 00:25:18.293 Latency(us) 00:25:18.293 [2024-11-04T15:36:45.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:18.293 nvme0n1 : 2.00 5280.31 660.04 0.00 0.00 3026.66 647.56 9674.36 00:25:18.293 [2024-11-04T15:36:45.117Z] =================================================================================================================== 00:25:18.293 [2024-11-04T15:36:45.117Z] Total : 5280.31 660.04 0.00 0.00 3026.66 647.56 9674.36 00:25:18.293 { 00:25:18.293 "results": [ 00:25:18.293 { 00:25:18.293 "job": "nvme0n1", 00:25:18.293 "core_mask": "0x2", 00:25:18.293 "workload": "randread", 00:25:18.293 "status": "finished", 00:25:18.293 "queue_depth": 16, 00:25:18.293 "io_size": 131072, 00:25:18.293 "runtime": 2.004049, 00:25:18.293 "iops": 5280.310012379937, 00:25:18.293 "mibps": 660.0387515474921, 00:25:18.293 "io_failed": 0, 00:25:18.293 "io_timeout": 0, 00:25:18.293 "avg_latency_us": 3026.6579719379715, 00:25:18.293 "min_latency_us": 647.5580952380952, 00:25:18.293 "max_latency_us": 9674.361904761905 00:25:18.293 } 00:25:18.293 ], 00:25:18.293 "core_count": 1 00:25:18.293 } 00:25:18.293 16:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:18.293 16:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:18.293 
16:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:18.293 | .driver_specific 00:25:18.293 | .nvme_error 00:25:18.293 | .status_code 00:25:18.293 | .command_transient_transport_error' 00:25:18.293 16:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959093 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2959093 ']' 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2959093 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:18.551 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959093 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959093' 00:25:18.552 killing process with pid 2959093 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2959093 00:25:18.552 Received shutdown signal, test time was about 2.000000 seconds 00:25:18.552 
00:25:18.552 Latency(us) 00:25:18.552 [2024-11-04T15:36:45.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.552 [2024-11-04T15:36:45.376Z] =================================================================================================================== 00:25:18.552 [2024-11-04T15:36:45.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2959093 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2959565 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2959565 /var/tmp/bperf.sock 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2959565 ']' 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.552 16:36:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.552 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.810 [2024-11-04 16:36:45.405199] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:18.810 [2024-11-04 16:36:45.405246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959565 ] 00:25:18.810 [2024-11-04 16:36:45.469671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.810 [2024-11-04 16:36:45.506874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.810 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.810 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:18.810 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.810 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.067 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:19.067 16:36:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.067 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.067 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.067 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.067 16:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.633 nvme0n1 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.633 16:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.633 Running I/O for 2 seconds... 
00:25:19.633 [2024-11-04 16:36:46.330318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ee5c8 00:25:19.633 [2024-11-04 16:36:46.331089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.331121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.339574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ef6a8 00:25:19.633 [2024-11-04 16:36:46.340243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.340266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.348969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fac10 00:25:19.633 [2024-11-04 16:36:46.349847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.349869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.359301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8d30 00:25:19.633 [2024-11-04 16:36:46.360641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.367697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e01f8 00:25:19.633 [2024-11-04 16:36:46.368613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.368633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.376014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fda78 00:25:19.633 [2024-11-04 16:36:46.377205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.377224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.385452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebfd0 00:25:19.633 [2024-11-04 16:36:46.386781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.386801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.393818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fac10 00:25:19.633 [2024-11-04 16:36:46.394479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.394498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.403075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e0a68 00:25:19.633 [2024-11-04 16:36:46.403598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.403624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.412504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f5be8 00:25:19.633 [2024-11-04 16:36:46.413150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.413170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.421927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0 00:25:19.633 [2024-11-04 16:36:46.422689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.422708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.430466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1868 00:25:19.633 [2024-11-04 16:36:46.431544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.431566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.439880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f31b8 00:25:19.633 [2024-11-04 16:36:46.441080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.441098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:19.633 [2024-11-04 16:36:46.449307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7970 00:25:19.633 [2024-11-04 16:36:46.450621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.633 [2024-11-04 16:36:46.450641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:19.892 [2024-11-04 16:36:46.459151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0 00:25:19.892 [2024-11-04 16:36:46.460678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.892 [2024-11-04 16:36:46.460697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:19.892 [2024-11-04 16:36:46.468800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166df988 00:25:19.892 [2024-11-04 16:36:46.470357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.892 
[2024-11-04 16:36:46.470375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.476630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0350
00:25:19.892 [2024-11-04 16:36:46.477621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.477639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.485893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e0ea0
00:25:19.892 [2024-11-04 16:36:46.487128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.487146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.493296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e01f8
00:25:19.892 [2024-11-04 16:36:46.493923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.493943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.501714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4f40
00:25:19.892 [2024-11-04 16:36:46.502417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.502436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.511123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f2948
00:25:19.892 [2024-11-04 16:36:46.511959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.511979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.520535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fb8b8
00:25:19.892 [2024-11-04 16:36:46.521501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.521520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.529937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e12d8
00:25:19.892 [2024-11-04 16:36:46.531004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.531023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.539331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fc128
00:25:19.892 [2024-11-04 16:36:46.540526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.540544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.548762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f2948
00:25:19.892 [2024-11-04 16:36:46.550058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.550078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.558171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0350
00:25:19.892 [2024-11-04 16:36:46.559586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.559607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.567499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fcdd0
00:25:19.892 [2024-11-04 16:36:46.569033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.569052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:19.892 [2024-11-04 16:36:46.573830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1430
00:25:19.892 [2024-11-04 16:36:46.574540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.892 [2024-11-04 16:36:46.574559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.582967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e95a0
00:25:19.893 [2024-11-04 16:36:46.583595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.583618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.592219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebfd0
00:25:19.893 [2024-11-04 16:36:46.592870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.592889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.601527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fef90
00:25:19.893 [2024-11-04 16:36:46.602358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.602377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.611805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fe2e8
00:25:19.893 [2024-11-04 16:36:46.613106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.613125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.621232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166df118
00:25:19.893 [2024-11-04 16:36:46.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.622656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.630574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebb98
00:25:19.893 [2024-11-04 16:36:46.632112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.632131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.636907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4298
00:25:19.893 [2024-11-04 16:36:46.637614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.637633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.646064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ea248
00:25:19.893 [2024-11-04 16:36:46.646694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.646713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.655355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7100
00:25:19.893 [2024-11-04 16:36:46.656099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.656118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.664756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f5be8
00:25:19.893 [2024-11-04 16:36:46.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.665634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.674127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebfd0
00:25:19.893 [2024-11-04 16:36:46.675192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.675211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.683542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ddc00
00:25:19.893 [2024-11-04 16:36:46.684727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.684746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.692071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebb98
00:25:19.893 [2024-11-04 16:36:46.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.693263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.701530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fe720
00:25:19.893 [2024-11-04 16:36:46.702862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.702881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:19.893 [2024-11-04 16:36:46.711076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f9b30
00:25:19.893 [2024-11-04 16:36:46.712512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.893 [2024-11-04 16:36:46.712532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.719840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fb048
00:25:20.152 [2024-11-04 16:36:46.720916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.729153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e49b0
00:25:20.152 [2024-11-04 16:36:46.730371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.730390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.737130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4f40
00:25:20.152 [2024-11-04 16:36:46.738388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.738407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.744871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0788
00:25:20.152 [2024-11-04 16:36:46.745554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.745573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.755580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ff3c8
00:25:20.152 [2024-11-04 16:36:46.756736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.756756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.764977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8088
00:25:20.152 [2024-11-04 16:36:46.766244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.766268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.774377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e99d8
00:25:20.152 [2024-11-04 16:36:46.775783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.775801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.782702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f20d8
00:25:20.152 [2024-11-04 16:36:46.783649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.783668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.790974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e84c0
00:25:20.152 [2024-11-04 16:36:46.791996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.792015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.800360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0
00:25:20.152 [2024-11-04 16:36:46.801500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.801519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.809754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f3e60
00:25:20.152 [2024-11-04 16:36:46.811016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.811035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.819155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e3d08
00:25:20.152 [2024-11-04 16:36:46.820540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.820560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.828556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166df550
00:25:20.152 [2024-11-04 16:36:46.830055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.830079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.834894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4f40
00:25:20.152 [2024-11-04 16:36:46.835611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.835630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.843650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8e88
00:25:20.152 [2024-11-04 16:36:46.844348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.844367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.853089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dece0
00:25:20.152 [2024-11-04 16:36:46.853890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.853909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.862484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166df118
00:25:20.152 [2024-11-04 16:36:46.863393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.863411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.871891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9e10
00:25:20.152 [2024-11-04 16:36:46.872911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.872929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.881305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0ff8
00:25:20.152 [2024-11-04 16:36:46.882452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.152 [2024-11-04 16:36:46.882472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:20.152 [2024-11-04 16:36:46.890742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dece0
00:25:20.153 [2024-11-04 16:36:46.892003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.892022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.899064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e38d0
00:25:20.153 [2024-11-04 16:36:46.899895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.899919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.907371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ec840
00:25:20.153 [2024-11-04 16:36:46.908196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.908215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.918162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ec840
00:25:20.153 [2024-11-04 16:36:46.919449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.919468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.925924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8618
00:25:20.153 [2024-11-04 16:36:46.926737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.926756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.934866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4298
00:25:20.153 [2024-11-04 16:36:46.935683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.935702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.943491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fb048
00:25:20.153 [2024-11-04 16:36:46.944307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.944325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.953065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0788
00:25:20.153 [2024-11-04 16:36:46.953985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.954004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.962465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ec840
00:25:20.153 [2024-11-04 16:36:46.963501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.963521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:20.153 [2024-11-04 16:36:46.970241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dfdc0
00:25:20.153 [2024-11-04 16:36:46.970808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.153 [2024-11-04 16:36:46.970826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:20.411 [2024-11-04 16:36:46.979617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fc998
00:25:20.411 [2024-11-04 16:36:46.980234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:46.980256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:46.988816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e6738
00:25:20.412 [2024-11-04 16:36:46.989366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:46.989385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:46.999699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0
00:25:20.412 [2024-11-04 16:36:47.000723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.000743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.008003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0
00:25:20.412 [2024-11-04 16:36:47.009011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.009030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.017404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f96f8
00:25:20.412 [2024-11-04 16:36:47.018530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.018550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.026807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e4de8
00:25:20.412 [2024-11-04 16:36:47.028051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.028071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.036229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eb328
00:25:20.412 [2024-11-04 16:36:47.037586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.037610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.045820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166df550
00:25:20.412 [2024-11-04 16:36:47.047304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.047324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.053627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ed920
00:25:20.412 [2024-11-04 16:36:47.054669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.054688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.062802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e5658
00:25:20.412 [2024-11-04 16:36:47.063822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.063842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.071177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fc998
00:25:20.412 [2024-11-04 16:36:47.072169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.080512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f3e60
00:25:20.412 [2024-11-04 16:36:47.081628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.081647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.089937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eaab8
00:25:20.412 [2024-11-04 16:36:47.091167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.091186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.099635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0350
00:25:20.412 [2024-11-04 16:36:47.100983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.101003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.109063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f96f8
00:25:20.412 [2024-11-04 16:36:47.110528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.110548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.115397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e2c28
00:25:20.412 [2024-11-04 16:36:47.116039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.116058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.125627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ebb98
00:25:20.412 [2024-11-04 16:36:47.126724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:20.412 [2024-11-04 16:36:47.135024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f9b30
00:25:20.412 [2024-11-04 16:36:47.136237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.412 [2024-11-04 16:36:47.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.144428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dfdc0 00:25:20.412 [2024-11-04 16:36:47.145765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.145784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.153762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8088 00:25:20.412 [2024-11-04 16:36:47.155212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.155232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.160099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7970 00:25:20.412 [2024-11-04 16:36:47.160813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.160833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.170417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8e88 00:25:20.412 [2024-11-04 16:36:47.171190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.171209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.178832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e7818 00:25:20.412 [2024-11-04 16:36:47.180126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.180145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.186574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e0630 00:25:20.412 [2024-11-04 16:36:47.187316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.187335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.196038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eee38 00:25:20.412 [2024-11-04 16:36:47.196861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.196879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.205450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fe2e8 00:25:20.412 [2024-11-04 16:36:47.206310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.412 [2024-11-04 16:36:47.206329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:20.412 [2024-11-04 16:36:47.214870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ec840 00:25:20.413 [2024-11-04 16:36:47.215836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.413 [2024-11-04 16:36:47.215858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:20.413 [2024-11-04 16:36:47.224257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0ff8 00:25:20.413 [2024-11-04 16:36:47.225431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.413 [2024-11-04 16:36:47.225450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:20.413 [2024-11-04 16:36:47.233835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eee38 00:25:20.413 [2024-11-04 16:36:47.235273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.413 [2024-11-04 16:36:47.235293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:20.671 [2024-11-04 16:36:47.243696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f2d80 00:25:20.671 [2024-11-04 16:36:47.245110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 
[2024-11-04 16:36:47.245128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:20.671 [2024-11-04 16:36:47.253127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9e10 00:25:20.671 [2024-11-04 16:36:47.254568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-11-04 16:36:47.254588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:20.671 [2024-11-04 16:36:47.260911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1868 00:25:20.671 [2024-11-04 16:36:47.261882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-11-04 16:36:47.261901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:20.671 [2024-11-04 16:36:47.270170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8e88 00:25:20.671 [2024-11-04 16:36:47.271388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.671 [2024-11-04 16:36:47.271407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:20.671 [2024-11-04 16:36:47.276913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e23b8 00:25:20.671 [2024-11-04 16:36:47.277604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17309 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.277623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.288078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e2c28 00:25:20.672 [2024-11-04 16:36:47.289271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.289290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.297495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f2510 00:25:20.672 [2024-11-04 16:36:47.298806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.298825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.306936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fcdd0 00:25:20.672 [2024-11-04 16:36:47.308354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.308373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.313281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e23b8 00:25:20.672 [2024-11-04 16:36:47.313947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.313966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:20.672 28058.00 IOPS, 109.60 MiB/s [2024-11-04T15:36:47.496Z] [2024-11-04 16:36:47.323526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8088 00:25:20.672 [2024-11-04 16:36:47.324649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.324668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.332934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f46d0 00:25:20.672 [2024-11-04 16:36:47.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.334241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.342352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4f40 00:25:20.672 [2024-11-04 16:36:47.343732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.343751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.352004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e6300 
00:25:20.672 [2024-11-04 16:36:47.353499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.353518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.358339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1ca0 00:25:20.672 [2024-11-04 16:36:47.359019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.359038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.368082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e88f8 00:25:20.672 [2024-11-04 16:36:47.369334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.369352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.375831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f81e0 00:25:20.672 [2024-11-04 16:36:47.376492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.376511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.385261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xcd3500) with pdu=0x2000166f7538 00:25:20.672 [2024-11-04 16:36:47.386025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.386043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.394709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f6890 00:25:20.672 [2024-11-04 16:36:47.395604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.395622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.404718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fb048 00:25:20.672 [2024-11-04 16:36:47.405753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.405772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.414017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fc998 00:25:20.672 [2024-11-04 16:36:47.415161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.415180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.422975] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fda78 00:25:20.672 [2024-11-04 16:36:47.424230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.432385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9e10 00:25:20.672 [2024-11-04 16:36:47.433739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.433757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.440738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166de470 00:25:20.672 [2024-11-04 16:36:47.441754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.441772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.448946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f96f8 00:25:20.672 [2024-11-04 16:36:47.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.450191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:20.672 [2024-11-04 16:36:47.457261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e23b8 00:25:20.672 [2024-11-04 16:36:47.457944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.457963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.466261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7da8 00:25:20.672 [2024-11-04 16:36:47.466959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.466978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.475272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e7c50 00:25:20.672 [2024-11-04 16:36:47.475976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.475995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.484370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166edd58 00:25:20.672 [2024-11-04 16:36:47.485065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.485083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.672 [2024-11-04 16:36:47.493532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7970 00:25:20.672 [2024-11-04 16:36:47.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.672 [2024-11-04 16:36:47.494306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.502890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f31b8 00:25:20.931 [2024-11-04 16:36:47.503572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.503590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.511920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1430 00:25:20.931 [2024-11-04 16:36:47.512650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.512669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.521260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ef6a8 00:25:20.931 [2024-11-04 16:36:47.521952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.521972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.530251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dece0 00:25:20.931 [2024-11-04 16:36:47.530959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.530977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.539259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0788 00:25:20.931 [2024-11-04 16:36:47.539962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.539981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.548160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fd640 00:25:20.931 [2024-11-04 16:36:47.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.548871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.557157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e1b48 00:25:20.931 [2024-11-04 16:36:47.557859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.557878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.566156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fbcf0 00:25:20.931 [2024-11-04 16:36:47.566847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.566866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.575124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f6cc8 00:25:20.931 [2024-11-04 16:36:47.575819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.575837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.584132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fdeb0 00:25:20.931 [2024-11-04 16:36:47.584833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.584851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.593123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9168 00:25:20.931 [2024-11-04 16:36:47.593819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 
[2024-11-04 16:36:47.593837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.602332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4298 00:25:20.931 [2024-11-04 16:36:47.603031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.603050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.611322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e0630 00:25:20.931 [2024-11-04 16:36:47.612016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.612035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.931 [2024-11-04 16:36:47.620305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fe2e8 00:25:20.931 [2024-11-04 16:36:47.620996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.931 [2024-11-04 16:36:47.621014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.629280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8088 00:25:20.932 [2024-11-04 16:36:47.629973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19169 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.629991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.638275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e5a90 00:25:20.932 [2024-11-04 16:36:47.638973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.638992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.647244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eb328 00:25:20.932 [2024-11-04 16:36:47.647938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.656228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ef270 00:25:20.932 [2024-11-04 16:36:47.656924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.656942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.665221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e4578 00:25:20.932 [2024-11-04 16:36:47.665920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:18249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.665940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.674100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f96f8 00:25:20.932 [2024-11-04 16:36:47.674797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.674815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.683111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ff3c8 00:25:20.932 [2024-11-04 16:36:47.683803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.683825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.692135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ecc78 00:25:20.932 [2024-11-04 16:36:47.692831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.692850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.701122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fda78 00:25:20.932 [2024-11-04 16:36:47.701822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.701840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.710105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ea248 00:25:20.932 [2024-11-04 16:36:47.710804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.710823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.719076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f9f68 00:25:20.932 [2024-11-04 16:36:47.719772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.719791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.728067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eea00 00:25:20.932 [2024-11-04 16:36:47.728768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.728786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.737059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eaef0 00:25:20.932 
[2024-11-04 16:36:47.737752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.737771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:20.932 [2024-11-04 16:36:47.746040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8d30 00:25:20.932 [2024-11-04 16:36:47.746733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.932 [2024-11-04 16:36:47.746751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.190 [2024-11-04 16:36:47.755379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e23b8 00:25:21.190 [2024-11-04 16:36:47.756139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.190 [2024-11-04 16:36:47.756158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.764619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7da8 00:25:21.191 [2024-11-04 16:36:47.765320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.765339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.773620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcd3500) with pdu=0x2000166e7c50 00:25:21.191 [2024-11-04 16:36:47.774312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.774330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.782617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166edd58 00:25:21.191 [2024-11-04 16:36:47.783313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.783332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.791644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7970 00:25:21.191 [2024-11-04 16:36:47.792337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.792355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.800628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f31b8 00:25:21.191 [2024-11-04 16:36:47.801313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.801331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.809631] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f1430 00:25:21.191 [2024-11-04 16:36:47.810321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.810340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.818625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ef6a8 00:25:21.191 [2024-11-04 16:36:47.819322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.819340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.827609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dece0 00:25:21.191 [2024-11-04 16:36:47.828296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.828314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.836608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f0788 00:25:21.191 [2024-11-04 16:36:47.837299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.837318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:25:21.191 [2024-11-04 16:36:47.845614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fd640 00:25:21.191 [2024-11-04 16:36:47.846306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.846325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.854854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e1b48 00:25:21.191 [2024-11-04 16:36:47.855542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.855561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.863841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fbcf0 00:25:21.191 [2024-11-04 16:36:47.864531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.864550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.872823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f6cc8 00:25:21.191 [2024-11-04 16:36:47.873513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.873532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.881826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fdeb0 00:25:21.191 [2024-11-04 16:36:47.882512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.882530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.890722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9168 00:25:21.191 [2024-11-04 16:36:47.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.891434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.899719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4298 00:25:21.191 [2024-11-04 16:36:47.900414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.900434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.908713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e0630 00:25:21.191 [2024-11-04 16:36:47.909378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.909396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.917784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fe2e8 00:25:21.191 [2024-11-04 16:36:47.918447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.918469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.926784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8088 00:25:21.191 [2024-11-04 16:36:47.927481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.927499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.935796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e5a90 00:25:21.191 [2024-11-04 16:36:47.936487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.936505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.944783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eb328 00:25:21.191 [2024-11-04 16:36:47.945471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.945489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.953783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ef270 00:25:21.191 [2024-11-04 16:36:47.954471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.954489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.962770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e4578 00:25:21.191 [2024-11-04 16:36:47.963459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.963478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.971760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f96f8 00:25:21.191 [2024-11-04 16:36:47.972456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.972474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.980759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ff3c8 00:25:21.191 [2024-11-04 16:36:47.981420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 
[2024-11-04 16:36:47.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.989721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ecc78 00:25:21.191 [2024-11-04 16:36:47.990408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.990426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.191 [2024-11-04 16:36:47.998704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fda78 00:25:21.191 [2024-11-04 16:36:47.999393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.191 [2024-11-04 16:36:47.999414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.192 [2024-11-04 16:36:48.007693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ea248 00:25:21.192 [2024-11-04 16:36:48.008379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.192 [2024-11-04 16:36:48.008397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.017043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f9f68 00:25:21.450 [2024-11-04 16:36:48.017767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20635 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.017787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.026218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eea00 00:25:21.450 [2024-11-04 16:36:48.026915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.026934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.035278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eaef0 00:25:21.450 [2024-11-04 16:36:48.035978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.035997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.044280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e8d30 00:25:21.450 [2024-11-04 16:36:48.044970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.044989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.053269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e23b8 00:25:21.450 [2024-11-04 16:36:48.053941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.053960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.062465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f7da8 00:25:21.450 [2024-11-04 16:36:48.063145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.063164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.071466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e7c50 00:25:21.450 [2024-11-04 16:36:48.072145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.072165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.079899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dfdc0 00:25:21.450 [2024-11-04 16:36:48.080488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.450 [2024-11-04 16:36:48.080507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:21.450 [2024-11-04 16:36:48.090670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166dfdc0 00:25:21.450 [2024-11-04 16:36:48.091715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.091734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.099611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ee5c8 00:25:21.451 [2024-11-04 16:36:48.100719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.100738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.109287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8e88 00:25:21.451 [2024-11-04 16:36:48.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.110544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.117628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fb8b8 00:25:21.451 [2024-11-04 16:36:48.118514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.118533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.126469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e38d0 00:25:21.451 
[2024-11-04 16:36:48.127267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.127286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.135729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e2c28 00:25:21.451 [2024-11-04 16:36:48.136768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.136787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.145968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ff3c8 00:25:21.451 [2024-11-04 16:36:48.147436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.147455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.152298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166ddc00 00:25:21.451 [2024-11-04 16:36:48.152964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.152984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.160803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcd3500) with pdu=0x2000166fc560 00:25:21.451 [2024-11-04 16:36:48.161424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.161442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.170227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f5be8 00:25:21.451 [2024-11-04 16:36:48.170991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.171010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.180267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166eee38 00:25:21.451 [2024-11-04 16:36:48.181064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.181084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.189535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4298 00:25:21.451 [2024-11-04 16:36:48.190448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.190467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.198942] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fdeb0 00:25:21.451 [2024-11-04 16:36:48.200100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.200119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.206934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e73e0 00:25:21.451 [2024-11-04 16:36:48.208147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.208166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.214657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fc560 00:25:21.451 [2024-11-04 16:36:48.215278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.215297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.224080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166fbcf0 00:25:21.451 [2024-11-04 16:36:48.224829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.224847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:25:21.451 [2024-11-04 16:36:48.233469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8a50 00:25:21.451 [2024-11-04 16:36:48.234337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.234358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.242898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e4578 00:25:21.451 [2024-11-04 16:36:48.243863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.243882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.252300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f4b08 00:25:21.451 [2024-11-04 16:36:48.253398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.253418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.261713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e9168 00:25:21.451 [2024-11-04 16:36:48.262930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.262949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:21.451 [2024-11-04 16:36:48.271087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f8e88 00:25:21.451 [2024-11-04 16:36:48.272509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.451 [2024-11-04 16:36:48.272527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:21.710 [2024-11-04 16:36:48.280991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e27f0 00:25:21.710 [2024-11-04 16:36:48.282470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.710 [2024-11-04 16:36:48.282489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:21.710 [2024-11-04 16:36:48.287328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e5658 00:25:21.710 [2024-11-04 16:36:48.287965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.710 [2024-11-04 16:36:48.287984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:21.710 [2024-11-04 16:36:48.295844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f6cc8 00:25:21.710 [2024-11-04 16:36:48.296464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.710 [2024-11-04 16:36:48.296483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:21.710 [2024-11-04 16:36:48.305269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166e1710 00:25:21.710 [2024-11-04 16:36:48.305992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.710 [2024-11-04 16:36:48.306011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:21.710 [2024-11-04 16:36:48.314699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd3500) with pdu=0x2000166f2d80 00:25:21.710 [2024-11-04 16:36:48.315563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.710 [2024-11-04 16:36:48.315584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:21.710 28202.00 IOPS, 110.16 MiB/s 00:25:21.710 Latency(us) 00:25:21.710 [2024-11-04T15:36:48.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:21.710 nvme0n1 : 2.00 28221.15 110.24 0.00 0.00 4529.88 2231.34 13856.18 00:25:21.710 [2024-11-04T15:36:48.534Z] =================================================================================================================== 00:25:21.710 [2024-11-04T15:36:48.534Z] Total : 28221.15 110.24 0.00 0.00 4529.88 2231.34 13856.18 00:25:21.710 { 00:25:21.710 "results": [ 00:25:21.710 { 00:25:21.710 "job": "nvme0n1", 00:25:21.710 "core_mask": "0x2", 00:25:21.710 "workload": "randwrite", 00:25:21.710 "status": "finished", 00:25:21.710 "queue_depth": 128, 00:25:21.710 "io_size": 4096, 00:25:21.710 "runtime": 2.004844, 
00:25:21.710 "iops": 28221.148378626964, 00:25:21.710 "mibps": 110.23886085401158, 00:25:21.710 "io_failed": 0, 00:25:21.710 "io_timeout": 0, 00:25:21.710 "avg_latency_us": 4529.879341737933, 00:25:21.710 "min_latency_us": 2231.344761904762, 00:25:21.710 "max_latency_us": 13856.182857142858 00:25:21.710 } 00:25:21.710 ], 00:25:21.710 "core_count": 1 00:25:21.710 } 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:21.710 | .driver_specific 00:25:21.710 | .nvme_error 00:25:21.710 | .status_code 00:25:21.710 | .command_transient_transport_error' 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:25:21.710 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2959565 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2959565 ']' 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2959565 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959565 00:25:21.968 16:36:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959565' 00:25:21.968 killing process with pid 2959565 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2959565 00:25:21.968 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.968 00:25:21.968 Latency(us) 00:25:21.968 [2024-11-04T15:36:48.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.968 [2024-11-04T15:36:48.792Z] =================================================================================================================== 00:25:21.968 [2024-11-04T15:36:48.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2959565 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:21.968 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2960252 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2960252 /var/tmp/bperf.sock 00:25:21.969 16:36:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2960252 ']' 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.969 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.969 [2024-11-04 16:36:48.774402] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:21.969 [2024-11-04 16:36:48.774447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960252 ] 00:25:21.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.969 Zero copy mechanism will not be used. 
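The `get_transient_errcount` helper traced above pipes `bdev_get_iostat` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then checks `(( 221 > 0 ))`. A minimal Python sketch of the same extraction, against a hypothetical iostat payload shaped like that RPC output (only the fields the filter touches are included; the count 221 is taken from the log line above):

```python
import json

# Hypothetical bdev_get_iostat reply; only the fields the jq filter in
# host/digest.sh actually dereferences are sketched here.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 221
          }
        }
      }
    }
  ]
}
""")

# Equivalent of the jq pipeline:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#   | .command_transient_transport_error
errcount = (iostat["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])

# Mirrors the '(( 221 > 0 ))' assertion the test script makes.
assert errcount > 0
print(errcount)
```

Each digest-error completion in the run increments this per-status-code counter, which is why the test can pass simply by requiring a nonzero count after two seconds of I/O.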
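The flood of `data_crc32_calc_done: *ERROR*: Data digest error` lines comes from the NVMe/TCP data digest, which is a CRC-32C over the PDU payload; the test injects a corrupting `crc32c` operation through the `accel_error_inject_error` RPC, so every recomputed digest mismatches and each WRITE completes with TRANSIENT TRANSPORT ERROR (00/22). A bitwise reference sketch of reflected CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78) — for illustration only, not the table/accelerated code SPDK actually uses:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Reflected CRC-32C (Castagnoli), the checksum used for NVMe/TCP
    header and data digests. Bit-at-a-time version for clarity."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Corrupting even one payload byte changes the digest, which is the
# mismatch the injected accel error forces on every PDU in the log.
good = crc32c(b"example pdu payload")
bad = crc32c(b"example pdu paylaod")
assert good != bad
```

Because `bdev_nvme_attach_controller` was called with `--ddgst`, the data digest is enabled on the queue pair, so the corrupted checksum is detected on completion rather than silently ignored.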
00:25:22.227 [2024-11-04 16:36:48.836919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.227 [2024-11-04 16:36:48.879683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.227 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.227 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:22.227 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.227 16:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.484 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.743 nvme0n1 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:22.743 16:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.743 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:22.743 Zero copy mechanism will not be used. 00:25:22.743 Running I/O for 2 seconds... 00:25:22.743 [2024-11-04 16:36:49.548908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:22.743 [2024-11-04 16:36:49.549171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.743 [2024-11-04 16:36:49.549198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.743 [2024-11-04 16:36:49.553751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:22.743 [2024-11-04 16:36:49.554009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.743 [2024-11-04 16:36:49.554035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.743 
[2024-11-04 16:36:49.558369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:22.743 [2024-11-04 16:36:49.558630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.743 [2024-11-04 16:36:49.558652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.743 [2024-11-04 16:36:49.562943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:22.743 [2024-11-04 16:36:49.563197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.743 [2024-11-04 16:36:49.563219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.567669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.567931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.567952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.572342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.572595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.572623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.576853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.577129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.581265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.581514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.581535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.585894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.586141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.586162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.590378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.590634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.590655] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.595090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.595357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.595378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.600112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.600372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.600394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.605202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.605261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.605280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.610920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.611173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 
16:36:49.611194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.616456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.616724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.622013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.622265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.622285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.627228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.627473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.627494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.632035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.632281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.632301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.636681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.636934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.636955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.641421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.002 [2024-11-04 16:36:49.641674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.002 [2024-11-04 16:36:49.641694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.002 [2024-11-04 16:36:49.646029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.646277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.646298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.650599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.650855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.650876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.655276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.655523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.655543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.659816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.660067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.660088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.664357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.664614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.664636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.669273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 
16:36:49.669523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.669544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.674291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.674551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.674572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.680185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.680446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.680467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.686073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.686321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.686341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.691288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.691534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.691555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.696162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.696421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.701217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.701479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.701500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.706237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.706484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.706509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.711179] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.711428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.711448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.715820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.716072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.716092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.720414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.720671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.720692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.003 [2024-11-04 16:36:49.725168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.003 [2024-11-04 16:36:49.725416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.003 [2024-11-04 16:36:49.725436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.729725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.729983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.730004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.734381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.734636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.734657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.739096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.739359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.739380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.743680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.743966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.748476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.748748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.748769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.753217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.753472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.753493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.757705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.757955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.757976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.762240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.003 [2024-11-04 16:36:49.762490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.003 [2024-11-04 16:36:49.762512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.003 [2024-11-04 16:36:49.766949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.767199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.767220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.771962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.772214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.772235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.777918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.778079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.778098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.783448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.783695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.783717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.788366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.788614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.788635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.793140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.793372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.793392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.798020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.798255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.798277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.802582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.802857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.807218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.807464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.807485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.811939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.812185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.812206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.816987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.817219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.817240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.004 [2024-11-04 16:36:49.821394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.004 [2024-11-04 16:36:49.821640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.004 [2024-11-04 16:36:49.821661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.825947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.826191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.826212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.830383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.830628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.830656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.834990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.835252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.839639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.839889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.839910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.844525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.844775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.844797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.850018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.850265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.850286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.855627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.855875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.855896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.860941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.861173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.861194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.866232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.866465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.866494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.871755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.872001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.872022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.877549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.877802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.877823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.882799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.883037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.883058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.888214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.888449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.888469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.892836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.893076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.893096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.897322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.897576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.901757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.901995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.902016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.906143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.906375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.906396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.910501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.910738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.910759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.915133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.915368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.915388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.920009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.920269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.924291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.924526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.924547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.928661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.928896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.928916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.933035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.933268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.933289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.937355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.937595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.937624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.941704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.941943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.941963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.945997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.946231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.946251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.950327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.950570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.950592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.954670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.954912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.954936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.959020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.959255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.264 [2024-11-04 16:36:49.963346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.264 [2024-11-04 16:36:49.963581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.264 [2024-11-04 16:36:49.963607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.967659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.967896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.967916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.971955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.972196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.972216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.976267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.976507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.976529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.980527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.980766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.980787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.984936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.985171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.985192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.989194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.989430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.989450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.993492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.993756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:49.997852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:49.998093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:49.998114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.002231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.002496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.002519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.006814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.007055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.007076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.011381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.011824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.011880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.017308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.017545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.017567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.021671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.021912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.021934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.026030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.026267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.026288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.030434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.030675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.030696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.035138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.035419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.035442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.040840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.041085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.041106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.045285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.045530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.045552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.050572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.050848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.050870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.056823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.057107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.057128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.063001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.063283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.063305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.068802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.069047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.069069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.074192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.074439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.074461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.079397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.265 [2024-11-04 16:36:50.079644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.265 [2024-11-04 16:36:50.079669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.265 [2024-11-04 16:36:50.083908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.266 [2024-11-04 16:36:50.084133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.266 [2024-11-04 16:36:50.084155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.088903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.089186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.089208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.095245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.095475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.095498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.099825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.100048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.100068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.104691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.104915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.104936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.109120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.109347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.109366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.113486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.113718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.526 [2024-11-04 16:36:50.113737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:23.526 [2024-11-04 16:36:50.117881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90
00:25:23.526 [2024-11-04 16:36:50.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.118128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.122240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.122475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.122496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.126581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.126815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.126837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.130914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.131136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.131157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.135247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.135480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.135502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.139610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.139841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.139862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.143933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.144159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.144180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.148274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.148497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.148519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.152700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.152933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.152955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.157433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.157666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.157686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.161855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.162083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.162105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.166223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.166463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.166483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.170560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:23.526 [2024-11-04 16:36:50.170794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.170815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.174945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.175170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.175189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.179287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.179517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.183617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.183840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.183860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.187921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.188144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.188164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.192241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.192471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.192492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.196563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.196800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.196826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.200865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.201087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.201108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.526 [2024-11-04 16:36:50.205314] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.526 [2024-11-04 16:36:50.205540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.526 [2024-11-04 16:36:50.205560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.210466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.210697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.210716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.216267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.216490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.216511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.221032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.221254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.221275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.225545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.225772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.225793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.230091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.230311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.230332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.234685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.234930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.239386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.239615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.239635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.243916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.244137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.244157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.249096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.249320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.249340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.253628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.253851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.253872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.258001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.258227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.258248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.262554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.262780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.262801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.267242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.267474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.267494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.272690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.272916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.272937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.277892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.278116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.527 [2024-11-04 16:36:50.278140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.282421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.282651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.282670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.287104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.287348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.291671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.291895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.291916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.295947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.296172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.296192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.300227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.300452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.300472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.304529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.304763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.304784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.308821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.309045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.309064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.313320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.313543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.313564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.317686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.317915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.527 [2024-11-04 16:36:50.317936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.527 [2024-11-04 16:36:50.322447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.527 [2024-11-04 16:36:50.322695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.528 [2024-11-04 16:36:50.322716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.528 [2024-11-04 16:36:50.327487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.528 [2024-11-04 16:36:50.327752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.528 [2024-11-04 16:36:50.327773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.528 [2024-11-04 16:36:50.333495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:23.528 [2024-11-04 16:36:50.333774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.528 [2024-11-04 16:36:50.333795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.528 [2024-11-04 16:36:50.338950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.528 [2024-11-04 16:36:50.339237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.528 [2024-11-04 16:36:50.339258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.528 [2024-11-04 16:36:50.344140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.528 [2024-11-04 16:36:50.344364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.528 [2024-11-04 16:36:50.344386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.787 [2024-11-04 16:36:50.348986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.787 [2024-11-04 16:36:50.349223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.787 [2024-11-04 16:36:50.349243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.787 [2024-11-04 16:36:50.354636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.787 [2024-11-04 16:36:50.354918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.787 [2024-11-04 16:36:50.354939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.787 [2024-11-04 16:36:50.361202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.787 [2024-11-04 16:36:50.361428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.787 [2024-11-04 16:36:50.361448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.787 [2024-11-04 16:36:50.367959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.368259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.368280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.374592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.374842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.374863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.379794] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.380013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.380033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.384446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.384676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.384697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.388951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.389173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.389194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.393313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.393536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.393557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.397666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.397888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.397909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.402018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.402241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.406371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.406598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.406629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.410831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.411054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.411076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.415749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.415976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.415996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.421199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.421426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.421446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.426133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.426357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.426377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.430794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.431038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.435451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.435678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.435698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.440050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.440278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.440298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.444632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.444868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.444889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.449295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.449523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.788 [2024-11-04 16:36:50.449543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.453878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.454103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.454124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.458462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.458691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.458712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.463131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.463354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.467870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.468094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.468115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.472717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.472940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.472960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.477413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.477645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.477665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.482061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.482292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.482313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.487667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.487984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.488005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.493909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.494183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.494204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.500333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.500669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.500689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.507218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.507523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.507547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.512889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:23.788 [2024-11-04 16:36:50.513115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.513136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.517211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.517433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.517455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.521487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.521713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.521734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.525933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.526156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.526178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.788 [2024-11-04 16:36:50.530172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.788 [2024-11-04 16:36:50.530396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.788 [2024-11-04 16:36:50.530416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.534447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.534674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.534699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.538699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.538926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.538947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.542966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.544110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.544132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.789 6446.00 IOPS, 805.75 MiB/s 
[2024-11-04T15:36:50.613Z] [2024-11-04 16:36:50.548679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.548794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.548812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.554144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.554282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.554301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.560428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.560564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.560585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.567560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.567699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.567721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.574194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.574300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.574320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.580870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.581064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.581084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.587398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.587541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.587560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.593768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.593914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.593933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.600842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.601019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.601037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.789 [2024-11-04 16:36:50.607338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:23.789 [2024-11-04 16:36:50.607510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.789 [2024-11-04 16:36:50.607529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.613963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.614089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.614108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.620863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.620994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.049 [2024-11-04 16:36:50.621012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.627458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.627604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.627623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.633918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.634046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.634065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.640796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.640947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.640965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.647513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.647689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.647708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.654425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.654499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.654518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.661564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.661709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.661728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.668916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.669047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.669066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.676241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.676408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.676427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.683277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.049 [2024-11-04 16:36:50.683428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.049 [2024-11-04 16:36:50.683447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.049 [2024-11-04 16:36:50.690225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.690385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.690403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.696941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.697024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.697043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.702027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.702092] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.702114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.706937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.707013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.707031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.711928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.711988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.712007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.716802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.716867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.716885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.721594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:24.050 [2024-11-04 16:36:50.721661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.721680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.726304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.726371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.726388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.731269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.731326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.731344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.735820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.735876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.735894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.740351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.740417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.740436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.744835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.744896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.744914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.749323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.749390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.749408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.753792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.753849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.753868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.758275] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.758333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.758351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.762773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.762828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.762846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.767227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.767283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.767301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.771725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.771781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.771800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:25:24.050 [2024-11-04 16:36:50.776211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.776265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.776283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.780652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.780724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.780742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.785312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.785376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.785394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.790197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.790265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.790283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.794959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.795014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.795032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.800213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.800267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.800285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.805891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.050 [2024-11-04 16:36:50.805946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.050 [2024-11-04 16:36:50.805965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.050 [2024-11-04 16:36:50.811416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.811477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.811495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.816839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.816896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.816916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.822831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.822902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.822920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.828006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.828078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.832729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.832788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.832806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.837278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.837344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.837362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.842156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.842222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.842240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.847374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.847438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.852964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.853022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 
[2024-11-04 16:36:50.853041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.858554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.858629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.858647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.864451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.864505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.864523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.051 [2024-11-04 16:36:50.869574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.051 [2024-11-04 16:36:50.869643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.051 [2024-11-04 16:36:50.869662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.874633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.874705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.874723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.879326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.879385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.879403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.884172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.884228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.884247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.889143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.889216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.889234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.894065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.894127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.894144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.898567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.898632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.898651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.903347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.903404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.903422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.908541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.908609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.908627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.914360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.914432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.914450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.919365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.919426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.919444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.924169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.924255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.924273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.929012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.929070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.929087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.933611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 
[2024-11-04 16:36:50.933666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.933684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.938176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.938233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.938251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.942708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.942778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.311 [2024-11-04 16:36:50.942796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.311 [2024-11-04 16:36:50.947260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.311 [2024-11-04 16:36:50.947327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.947345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.951747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.951802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.951820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.956190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.956250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.956271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.960677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.960741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.960760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.965140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.965194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.965212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.969639] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.969696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.969714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.974069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.974124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.974143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.978542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.978622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.978641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.982978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.983044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.983062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:24.312 [2024-11-04 16:36:50.987416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.987473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.987492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.991869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.991942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.991960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:50.996362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:50.996420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:50.996442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.000825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.000883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.000901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.005262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.005354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.009676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.009748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.009766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.014097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.014153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.014171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.018504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.018560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.018577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.022930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.022986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.023004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.027406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.027468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.027486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.031891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.031947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.031965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.036318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.036386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.036404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.040911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.040969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.040988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.045325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.045387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.045406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.049736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.049806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-11-04 16:36:51.049824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.054172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.054231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:24.312 [2024-11-04 16:36:51.054250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-11-04 16:36:51.058614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.312 [2024-11-04 16:36:51.058671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.058689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.063229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.063297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.063316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.067877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.067942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.067961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.073424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.073540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.073558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.078313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.078371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.078390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.082755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.082828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.082846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.087167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.087224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.087243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.091592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.091658] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.091677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.096039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.096156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.096176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.101138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.101204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.101222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.106067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.106151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.106172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.111460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.111516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.111534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.115972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.116043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.116064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.120468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.120539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.120559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.124977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.313 [2024-11-04 16:36:51.125033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.125051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-11-04 16:36:51.129649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:24.313 [2024-11-04 16:36:51.129737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-11-04 16:36:51.129757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.134956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.135026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.135044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.139664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.139740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.139758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.144678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.144744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.144763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.149964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.150022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.150040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.156233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.156338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.156356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.161455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.161539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.161558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.167139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.167193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.167211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.573 [2024-11-04 16:36:51.172459] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.573 [2024-11-04 16:36:51.172536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.573 [2024-11-04 16:36:51.172554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.177756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.177820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.177838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.182958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.183033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.183052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.187522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.187594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.187620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:24.574 [2024-11-04 16:36:51.192068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.192142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.192160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.197008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.197068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.197085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.201900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.201973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.201991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.206706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.206763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.206782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.211502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.211562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.211582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.216428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.216501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.216520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.221301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.221358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.221376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.226217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.226287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.226305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.231079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.231157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.231175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.235964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.236028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.236047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.240671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.240747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.240767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.245805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.245901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.245923] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.250644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.250715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.250734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.255529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.255586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.255613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.260135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.260199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.260217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.264575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.264645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:24.574 [2024-11-04 16:36:51.264665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.269031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.269089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.269107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.273421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.273478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.277886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.277939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.277957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.282265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.282321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.282339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.286591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.286662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.574 [2024-11-04 16:36:51.286680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.574 [2024-11-04 16:36:51.290922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.574 [2024-11-04 16:36:51.290979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.290997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.295195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.295250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.295268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.299551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.299647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.299665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.304607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.304664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.304683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.309016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.309073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.309092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.313282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.313337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.313356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.317939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.318017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.318036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.322750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.322829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.322847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.327652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.327708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.327727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.331943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.332006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.336374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 
00:25:24.575 [2024-11-04 16:36:51.336427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.336445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.340669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.340726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.340744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.344912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.344969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.344987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.349187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.349242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.349261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.353998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.354073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.354092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.358472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.358529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.358547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.363578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.363646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.363668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.368863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.368920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.368939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.374777] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.374838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.374856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.380219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.380287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.380306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.386065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.386120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.386138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.575 [2024-11-04 16:36:51.391671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.575 [2024-11-04 16:36:51.391733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.575 [2024-11-04 16:36:51.391767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:24.835 [2024-11-04 16:36:51.397511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.397570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.397588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.403550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.403616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.403635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.408980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.409065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.409083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.414782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.414849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.414868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.420148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.420209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.420227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.425219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.835 [2024-11-04 16:36:51.425277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.835 [2024-11-04 16:36:51.425295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.835 [2024-11-04 16:36:51.430086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.430139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.430157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.435657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.435738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.435756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.441312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.441380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.441398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.446375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.446441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.446459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.450938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.450996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.455469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.455551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.455569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.460304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.460378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.460397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.465221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.465293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.465313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.470219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.470276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.470294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.475309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.475382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.836 [2024-11-04 16:36:51.475400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.479894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.479949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.479968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.484292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.484356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.484374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.488627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.488695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.488713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.492980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.493048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.493067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.497342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.497405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.497427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.501669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.501742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.505973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.506029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.506048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.510333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.510386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.510404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.514695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.514759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.514777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.519455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.519527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.519545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.524205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.524278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.524296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.528696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.528756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.528774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.533094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.533161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.533180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.537469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.537535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.537554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.836 [2024-11-04 16:36:51.542754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.836 [2024-11-04 16:36:51.542833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.836 [2024-11-04 16:36:51.542851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.836 6315.50 IOPS, 789.44 MiB/s [2024-11-04T15:36:51.661Z] [2024-11-04 16:36:51.548464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcd39e0) with pdu=0x2000166fef90 00:25:24.837 [2024-11-04 16:36:51.548522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.837 [2024-11-04 16:36:51.548540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.837 00:25:24.837 Latency(us) 00:25:24.837 [2024-11-04T15:36:51.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.837 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:24.837 nvme0n1 : 2.00 6312.45 789.06 0.00 0.00 2530.11 1911.47 7333.79 00:25:24.837 [2024-11-04T15:36:51.661Z] =================================================================================================================== 00:25:24.837 [2024-11-04T15:36:51.661Z] Total : 6312.45 789.06 0.00 0.00 2530.11 1911.47 7333.79 00:25:24.837 { 00:25:24.837 "results": [ 00:25:24.837 { 00:25:24.837 "job": "nvme0n1", 00:25:24.837 "core_mask": "0x2", 00:25:24.837 "workload": "randwrite", 00:25:24.837 "status": "finished", 00:25:24.837 "queue_depth": 16, 00:25:24.837 "io_size": 131072, 00:25:24.837 "runtime": 2.003975, 00:25:24.837 "iops": 6312.453997679611, 00:25:24.837 "mibps": 789.0567497099514, 00:25:24.837 "io_failed": 0, 00:25:24.837 "io_timeout": 0, 00:25:24.837 "avg_latency_us": 2530.1099160549597, 00:25:24.837 "min_latency_us": 1911.4666666666667, 00:25:24.837 "max_latency_us": 7333.7904761904765 00:25:24.837 } 00:25:24.837 ], 00:25:24.837 "core_count": 1 00:25:24.837 } 00:25:24.837 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:24.837 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:24.837 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:24.837 | .driver_specific 00:25:24.837 | .nvme_error 00:25:24.837 | .status_code 00:25:24.837 | .command_transient_transport_error' 00:25:24.837 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 408 > 0 )) 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2960252 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2960252 ']' 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2960252 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960252 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960252' 00:25:25.095 killing process with pid 2960252 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2960252 00:25:25.095 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.095 00:25:25.095 Latency(us) 00:25:25.095 [2024-11-04T15:36:51.919Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.095 [2024-11-04T15:36:51.919Z] =================================================================================================================== 00:25:25.095 [2024-11-04T15:36:51.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.095 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2960252 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2958428 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2958428 ']' 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2958428 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.354 16:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958428 00:25:25.354 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.354 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.354 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958428' 00:25:25.354 killing process with pid 2958428 00:25:25.354 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2958428 00:25:25.354 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2958428 00:25:25.613 00:25:25.613 real 0m13.773s 00:25:25.613 user 0m26.264s 00:25:25.613 sys 0m4.505s 
00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.613 ************************************ 00:25:25.613 END TEST nvmf_digest_error 00:25:25.613 ************************************ 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.613 rmmod nvme_tcp 00:25:25.613 rmmod nvme_fabrics 00:25:25.613 rmmod nvme_keyring 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2958428 ']' 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2958428 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2958428 ']' 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2958428 00:25:25.613 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2958428) - No such process 00:25:25.613 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2958428 is not found' 00:25:25.614 Process with pid 2958428 is not found 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.614 16:36:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.147 00:25:28.147 real 0m36.006s 00:25:28.147 user 0m54.554s 00:25:28.147 sys 0m13.515s 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 
************************************ 00:25:28.147 END TEST nvmf_digest 00:25:28.147 ************************************ 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 ************************************ 00:25:28.147 START TEST nvmf_bdevperf 00:25:28.147 ************************************ 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.147 * Looking for test storage... 
00:25:28.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:28.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.147 --rc genhtml_branch_coverage=1 00:25:28.147 --rc genhtml_function_coverage=1 00:25:28.147 --rc genhtml_legend=1 00:25:28.147 --rc geninfo_all_blocks=1 00:25:28.147 --rc geninfo_unexecuted_blocks=1 00:25:28.147 00:25:28.147 ' 00:25:28.147 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:28.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.147 --rc genhtml_branch_coverage=1 00:25:28.147 --rc genhtml_function_coverage=1 00:25:28.147 --rc genhtml_legend=1 00:25:28.147 --rc geninfo_all_blocks=1 00:25:28.147 --rc geninfo_unexecuted_blocks=1 00:25:28.147 00:25:28.147 ' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.148 --rc genhtml_branch_coverage=1 00:25:28.148 --rc genhtml_function_coverage=1 00:25:28.148 --rc genhtml_legend=1 00:25:28.148 --rc geninfo_all_blocks=1 00:25:28.148 --rc geninfo_unexecuted_blocks=1 00:25:28.148 00:25:28.148 ' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:28.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.148 --rc genhtml_branch_coverage=1 00:25:28.148 --rc genhtml_function_coverage=1 00:25:28.148 --rc genhtml_legend=1 00:25:28.148 --rc geninfo_all_blocks=1 00:25:28.148 --rc geninfo_unexecuted_blocks=1 00:25:28.148 00:25:28.148 ' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.148 16:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.416 16:37:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.416 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:33.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.417 
16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:33.417 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:33.417 Found net devices under 0000:86:00.0: cvl_0_0 00:25:33.417 16:37:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:33.417 Found net devices under 0000:86:00.1: cvl_0_1 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.417 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:25:33.676 00:25:33.676 --- 10.0.0.2 ping statistics --- 00:25:33.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.676 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:33.676 00:25:33.676 --- 10.0.0.1 ping statistics --- 00:25:33.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.676 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2964261 00:25:33.676 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2964261 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2964261 ']' 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.677 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:33.677 [2024-11-04 16:37:00.489198] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:33.677 [2024-11-04 16:37:00.489249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.936 [2024-11-04 16:37:00.558721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.936 [2024-11-04 16:37:00.599839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.936 [2024-11-04 16:37:00.599876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.936 [2024-11-04 16:37:00.599884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.936 [2024-11-04 16:37:00.599890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.936 [2024-11-04 16:37:00.599895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.936 [2024-11-04 16:37:00.601299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.936 [2024-11-04 16:37:00.601362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.936 [2024-11-04 16:37:00.601363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:33.936 [2024-11-04 16:37:00.748850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.936 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:34.194 Malloc0 00:25:34.194 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:34.194 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.194 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.194 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:34.194 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:34.195 [2024-11-04 16:37:00.817251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:34.195 
16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:34.195 { 00:25:34.195 "params": { 00:25:34.195 "name": "Nvme$subsystem", 00:25:34.195 "trtype": "$TEST_TRANSPORT", 00:25:34.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.195 "adrfam": "ipv4", 00:25:34.195 "trsvcid": "$NVMF_PORT", 00:25:34.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.195 "hdgst": ${hdgst:-false}, 00:25:34.195 "ddgst": ${ddgst:-false} 00:25:34.195 }, 00:25:34.195 "method": "bdev_nvme_attach_controller" 00:25:34.195 } 00:25:34.195 EOF 00:25:34.195 )") 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:34.195 16:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:34.195 "params": { 00:25:34.195 "name": "Nvme1", 00:25:34.195 "trtype": "tcp", 00:25:34.195 "traddr": "10.0.0.2", 00:25:34.195 "adrfam": "ipv4", 00:25:34.195 "trsvcid": "4420", 00:25:34.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.195 "hdgst": false, 00:25:34.195 "ddgst": false 00:25:34.195 }, 00:25:34.195 "method": "bdev_nvme_attach_controller" 00:25:34.195 }' 00:25:34.195 [2024-11-04 16:37:00.868441] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:25:34.195 [2024-11-04 16:37:00.868483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964289 ] 00:25:34.195 [2024-11-04 16:37:00.931127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.195 [2024-11-04 16:37:00.971902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.453 Running I/O for 1 seconds... 00:25:35.387 11193.00 IOPS, 43.72 MiB/s 00:25:35.387 Latency(us) 00:25:35.387 [2024-11-04T15:37:02.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.387 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:35.387 Verification LBA range: start 0x0 length 0x4000 00:25:35.387 Nvme1n1 : 1.01 11201.61 43.76 0.00 0.00 11383.50 2512.21 12857.54 00:25:35.387 [2024-11-04T15:37:02.211Z] =================================================================================================================== 00:25:35.387 [2024-11-04T15:37:02.211Z] Total : 11201.61 43.76 0.00 0.00 11383.50 2512.21 12857.54 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2964516 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:35.646 { 00:25:35.646 "params": { 00:25:35.646 "name": "Nvme$subsystem", 00:25:35.646 "trtype": "$TEST_TRANSPORT", 00:25:35.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.646 "adrfam": "ipv4", 00:25:35.646 "trsvcid": "$NVMF_PORT", 00:25:35.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.646 "hdgst": ${hdgst:-false}, 00:25:35.646 "ddgst": ${ddgst:-false} 00:25:35.646 }, 00:25:35.646 "method": "bdev_nvme_attach_controller" 00:25:35.646 } 00:25:35.646 EOF 00:25:35.646 )") 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:35.646 16:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:35.646 "params": { 00:25:35.646 "name": "Nvme1", 00:25:35.646 "trtype": "tcp", 00:25:35.646 "traddr": "10.0.0.2", 00:25:35.646 "adrfam": "ipv4", 00:25:35.646 "trsvcid": "4420", 00:25:35.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.646 "hdgst": false, 00:25:35.646 "ddgst": false 00:25:35.646 }, 00:25:35.646 "method": "bdev_nvme_attach_controller" 00:25:35.646 }' 00:25:35.646 [2024-11-04 16:37:02.346896] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:25:35.646 [2024-11-04 16:37:02.346943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964516 ] 00:25:35.646 [2024-11-04 16:37:02.410665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.646 [2024-11-04 16:37:02.448587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.904 Running I/O for 15 seconds... 00:25:38.213 11438.00 IOPS, 44.68 MiB/s [2024-11-04T15:37:05.653Z] 11447.00 IOPS, 44.71 MiB/s [2024-11-04T15:37:05.653Z] 16:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2964261 00:25:38.829 16:37:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:38.829 [2024-11-04 16:37:05.320124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.829 [2024-11-04 16:37:05.320164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.829 [2024-11-04 16:37:05.320183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.829 [2024-11-04 16:37:05.320192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.830 [2024-11-04 16:37:05.320201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.830 [2024-11-04 16:37:05.320208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.830 [2024-11-04 16:37:05.320217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.830 [2024-11-04 16:37:05.320223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 93 further READ abort notices (lba 110488-111224, cid varies) and two WRITE abort notices (lba 111464, 111472) elided; every command on qid:1 completed ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:38.832 [2024-11-04 16:37:05.321903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04
16:37:05.321910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.321924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.321939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.321955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.321970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.321985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.321993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.832 [2024-11-04 16:37:05.322075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:38.832 [2024-11-04 16:37:05.322083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 
16:37:05.322169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322252] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.833 [2024-11-04 16:37:05.322319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.322328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ebd00 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.322337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:38.833 [2024-11-04 
16:37:05.322343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:38.833 [2024-11-04 16:37:05.322348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111456 len:8 PRP1 0x0 PRP2 0x0 00:25:38.833 [2024-11-04 16:37:05.322356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.833 [2024-11-04 16:37:05.325173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.833 [2024-11-04 16:37:05.325227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.325820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.833 [2024-11-04 16:37:05.325838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.833 [2024-11-04 16:37:05.325848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.326024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.326199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.833 [2024-11-04 16:37:05.326208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.833 [2024-11-04 16:37:05.326216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.833 [2024-11-04 16:37:05.326224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.833 [2024-11-04 16:37:05.338372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.833 [2024-11-04 16:37:05.338818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.833 [2024-11-04 16:37:05.338839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.833 [2024-11-04 16:37:05.338847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.339021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.339196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.833 [2024-11-04 16:37:05.339206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.833 [2024-11-04 16:37:05.339213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.833 [2024-11-04 16:37:05.339225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.833 [2024-11-04 16:37:05.351217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.833 [2024-11-04 16:37:05.351553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.833 [2024-11-04 16:37:05.351614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.833 [2024-11-04 16:37:05.351640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.352144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.352312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.833 [2024-11-04 16:37:05.352322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.833 [2024-11-04 16:37:05.352329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.833 [2024-11-04 16:37:05.352336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.833 [2024-11-04 16:37:05.363965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.833 [2024-11-04 16:37:05.364307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.833 [2024-11-04 16:37:05.364324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.833 [2024-11-04 16:37:05.364332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.364491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.364673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.833 [2024-11-04 16:37:05.364684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.833 [2024-11-04 16:37:05.364690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.833 [2024-11-04 16:37:05.364697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.833 [2024-11-04 16:37:05.376828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.833 [2024-11-04 16:37:05.377257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.833 [2024-11-04 16:37:05.377274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.833 [2024-11-04 16:37:05.377281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.833 [2024-11-04 16:37:05.377439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.833 [2024-11-04 16:37:05.377599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.377614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.377621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.377627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.389634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.390057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.390074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.390081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.390239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.390398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.390408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.390414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.390420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.402382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.402796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.402814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.402821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.402979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.403138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.403147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.403154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.403160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.415230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.415582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.415604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.415612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.415793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.415961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.415971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.415978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.415984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.427963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.428381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.428399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.428406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.428569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.428755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.428765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.428772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.428778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.440840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.441291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.441337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.441361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.441958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.442485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.442495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.442501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.442508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.453588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.453908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.453928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.453935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.454094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.454253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.454263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.454269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.454275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.466428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.466832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.466877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.466901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.467382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.467552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.834 [2024-11-04 16:37:05.467565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.834 [2024-11-04 16:37:05.467572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.834 [2024-11-04 16:37:05.467579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.834 [2024-11-04 16:37:05.479269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.834 [2024-11-04 16:37:05.479682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.834 [2024-11-04 16:37:05.479729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.834 [2024-11-04 16:37:05.479754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.834 [2024-11-04 16:37:05.480333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.834 [2024-11-04 16:37:05.480552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.480561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.480568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.480574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.492117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.492484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.492529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.492553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.493148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.493646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.493656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.493662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.493668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.504951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.505281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.505298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.505305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.505464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.505627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.505637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.505643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.505654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.517858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.518208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.518225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.518232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.518391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.518550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.518560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.518566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.518572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.530764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.531178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.531194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.531202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.531361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.531520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.531529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.531536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.531543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.543736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.544562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.544584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.544593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.544790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.544959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.544970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.544977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.544984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.556579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.556992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.557010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.557018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.557186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.557355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.557365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.557371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.557378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.569419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.569836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.569856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.569864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.570033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.570202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.570212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.570219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.570225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.582396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.582768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.582786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.582795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.582970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.583144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.583155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.583162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.835 [2024-11-04 16:37:05.583170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.835 [2024-11-04 16:37:05.595503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.835 [2024-11-04 16:37:05.595938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.835 [2024-11-04 16:37:05.595956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.835 [2024-11-04 16:37:05.595965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.835 [2024-11-04 16:37:05.596143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.835 [2024-11-04 16:37:05.596316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.835 [2024-11-04 16:37:05.596326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.835 [2024-11-04 16:37:05.596334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.836 [2024-11-04 16:37:05.596342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.836 [2024-11-04 16:37:05.608768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.836 [2024-11-04 16:37:05.609140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.836 [2024-11-04 16:37:05.609158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.836 [2024-11-04 16:37:05.609167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.836 [2024-11-04 16:37:05.609350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.836 [2024-11-04 16:37:05.609535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.836 [2024-11-04 16:37:05.609545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.836 [2024-11-04 16:37:05.609552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.836 [2024-11-04 16:37:05.609559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.836 [2024-11-04 16:37:05.622205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.836 [2024-11-04 16:37:05.622570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.836 [2024-11-04 16:37:05.622588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.836 [2024-11-04 16:37:05.622596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.836 [2024-11-04 16:37:05.622796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.836 [2024-11-04 16:37:05.622993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.836 [2024-11-04 16:37:05.623004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.836 [2024-11-04 16:37:05.623012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.836 [2024-11-04 16:37:05.623019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.836 [2024-11-04 16:37:05.635821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.836 [2024-11-04 16:37:05.636197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.836 [2024-11-04 16:37:05.636217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:38.836 [2024-11-04 16:37:05.636227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:38.836 [2024-11-04 16:37:05.636423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:38.836 [2024-11-04 16:37:05.636627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.836 [2024-11-04 16:37:05.636642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.836 [2024-11-04 16:37:05.636650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.836 [2024-11-04 16:37:05.636658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.649148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.649504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.649524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.649533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.649724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.649910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.649921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.649929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.649937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.662466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.662924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.662943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.662952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.663150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.663383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.663396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.663406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.663415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 10063.67 IOPS, 39.31 MiB/s [2024-11-04T15:37:05.921Z] [2024-11-04 16:37:05.675938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.676260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.676280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.676290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.676486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.676688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.676700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.676709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.676721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.689439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.689905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.689924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.689933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.690130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.690325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.690336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.690344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.690352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.703052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.703506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.703535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.703751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.703962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.703973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.703981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.703989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.716623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.717074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.717093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.717103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.717312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.717523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.717535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.717543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.717550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.730018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.730453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.730472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.730481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.730682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.730880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.730892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.730899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.730906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.097 [2024-11-04 16:37:05.743608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.097 [2024-11-04 16:37:05.744011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.097 [2024-11-04 16:37:05.744030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.097 [2024-11-04 16:37:05.744039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.097 [2024-11-04 16:37:05.744249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.097 [2024-11-04 16:37:05.744459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.097 [2024-11-04 16:37:05.744470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.097 [2024-11-04 16:37:05.744479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.097 [2024-11-04 16:37:05.744488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.757304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.757775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.757795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.757804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.758014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.758225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.758236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.758245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.758253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.770530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.770942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.770987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.771022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.771616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.772140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.772150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.772158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.772165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.783485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.783777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.783795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.783803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.783976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.784150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.784159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.784166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.784173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.796240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.796561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.796579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.796586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.796771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.796940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.796949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.796956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.796962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.809132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.809450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.809467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.809475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.809648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.809821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.809831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.809837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.809844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.821932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.822298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.822316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.822323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.822481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.822662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.822673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.822681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.822687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.835053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.835419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.835439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.835447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.835626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.835800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.835809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.835817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.835825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.848320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.848768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.848788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.848797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.848991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.849188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.849198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.849206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.849217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.098 [2024-11-04 16:37:05.862082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.098 [2024-11-04 16:37:05.862484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.098 [2024-11-04 16:37:05.862504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.098 [2024-11-04 16:37:05.862513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.098 [2024-11-04 16:37:05.862729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.098 [2024-11-04 16:37:05.862941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.098 [2024-11-04 16:37:05.862952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.098 [2024-11-04 16:37:05.862960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.098 [2024-11-04 16:37:05.862969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.099 [2024-11-04 16:37:05.875531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.099 [2024-11-04 16:37:05.876000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.099 [2024-11-04 16:37:05.876020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.099 [2024-11-04 16:37:05.876029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.099 [2024-11-04 16:37:05.876238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.099 [2024-11-04 16:37:05.876448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.099 [2024-11-04 16:37:05.876460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.099 [2024-11-04 16:37:05.876469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.099 [2024-11-04 16:37:05.876477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.099 [2024-11-04 16:37:05.888902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.099 [2024-11-04 16:37:05.889367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.099 [2024-11-04 16:37:05.889423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.099 [2024-11-04 16:37:05.889448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.099 [2024-11-04 16:37:05.890043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.099 [2024-11-04 16:37:05.890255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.099 [2024-11-04 16:37:05.890266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.099 [2024-11-04 16:37:05.890273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.099 [2024-11-04 16:37:05.890281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.099 [2024-11-04 16:37:05.901898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.099 [2024-11-04 16:37:05.902235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.099 [2024-11-04 16:37:05.902279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.099 [2024-11-04 16:37:05.902303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.099 [2024-11-04 16:37:05.902901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.099 [2024-11-04 16:37:05.903482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.099 [2024-11-04 16:37:05.903491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.099 [2024-11-04 16:37:05.903497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.099 [2024-11-04 16:37:05.903504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.099 [2024-11-04 16:37:05.914703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.099 [2024-11-04 16:37:05.915116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.099 [2024-11-04 16:37:05.915134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.099 [2024-11-04 16:37:05.915142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.099 [2024-11-04 16:37:05.915318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.099 [2024-11-04 16:37:05.915520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.099 [2024-11-04 16:37:05.915535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.099 [2024-11-04 16:37:05.915542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.099 [2024-11-04 16:37:05.915549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.359 [2024-11-04 16:37:05.927853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.359 [2024-11-04 16:37:05.928281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-11-04 16:37:05.928327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.359 [2024-11-04 16:37:05.928355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.359 [2024-11-04 16:37:05.928955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.359 [2024-11-04 16:37:05.929494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.359 [2024-11-04 16:37:05.929504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.359 [2024-11-04 16:37:05.929511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.359 [2024-11-04 16:37:05.929517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.359 [2024-11-04 16:37:05.940711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.359 [2024-11-04 16:37:05.941127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-11-04 16:37:05.941177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.359 [2024-11-04 16:37:05.941210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.359 [2024-11-04 16:37:05.941739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.359 [2024-11-04 16:37:05.941909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.359 [2024-11-04 16:37:05.941918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.359 [2024-11-04 16:37:05.941925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.359 [2024-11-04 16:37:05.941931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.359 [2024-11-04 16:37:05.953517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.359 [2024-11-04 16:37:05.953867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-11-04 16:37:05.953885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.359 [2024-11-04 16:37:05.953892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.359 [2024-11-04 16:37:05.954051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.359 [2024-11-04 16:37:05.954211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.359 [2024-11-04 16:37:05.954220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.359 [2024-11-04 16:37:05.954227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.359 [2024-11-04 16:37:05.954233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.359 [2024-11-04 16:37:05.966362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.359 [2024-11-04 16:37:05.966795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-11-04 16:37:05.966842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.359 [2024-11-04 16:37:05.966867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.359 [2024-11-04 16:37:05.967370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.359 [2024-11-04 16:37:05.967531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.359 [2024-11-04 16:37:05.967540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.359 [2024-11-04 16:37:05.967547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.359 [2024-11-04 16:37:05.967553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.359 [2024-11-04 16:37:05.979077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.359 [2024-11-04 16:37:05.979504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-11-04 16:37:05.979550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.359 [2024-11-04 16:37:05.979574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.359 [2024-11-04 16:37:05.980167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.359 [2024-11-04 16:37:05.980676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.359 [2024-11-04 16:37:05.980687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.359 [2024-11-04 16:37:05.980694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.359 [2024-11-04 16:37:05.980700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:05.991840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:05.992273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:05.992319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:05.992343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:05.992862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:05.993033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:05.993042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:05.993048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:05.993055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.004635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.005044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.005061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.005068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.005227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.005386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.005395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.005401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.005407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.017385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.017821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.017866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.017891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.018440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.018604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.018613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.018620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.018630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.030141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.030494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.030540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.030564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.031157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.031748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.031787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.031794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.031802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.042974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.043398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.043416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.043424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.043591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.043765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.043776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.043782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.043789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.055733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.056150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.056200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.056224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.056771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.056940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.056950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.056957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.056963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.068574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.068996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.069013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.069021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.069179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.069338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.069348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.069355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.069361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.081318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.081781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.081828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.081852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.082433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.082911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.082922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.082929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.082936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.360 [2024-11-04 16:37:06.094379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.360 [2024-11-04 16:37:06.094755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-11-04 16:37:06.094802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.360 [2024-11-04 16:37:06.094829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.360 [2024-11-04 16:37:06.095410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.360 [2024-11-04 16:37:06.095977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.360 [2024-11-04 16:37:06.095988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.360 [2024-11-04 16:37:06.095996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.360 [2024-11-04 16:37:06.096004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.361 [2024-11-04 16:37:06.107258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.361 [2024-11-04 16:37:06.107658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-11-04 16:37:06.107676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.361 [2024-11-04 16:37:06.107687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.361 [2024-11-04 16:37:06.107846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.361 [2024-11-04 16:37:06.108005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.361 [2024-11-04 16:37:06.108014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.361 [2024-11-04 16:37:06.108021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.361 [2024-11-04 16:37:06.108026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.361 [2024-11-04 16:37:06.120268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.361 [2024-11-04 16:37:06.120722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-11-04 16:37:06.120739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.361 [2024-11-04 16:37:06.120747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.361 [2024-11-04 16:37:06.120919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.361 [2024-11-04 16:37:06.121092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.361 [2024-11-04 16:37:06.121102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.361 [2024-11-04 16:37:06.121109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.361 [2024-11-04 16:37:06.121116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.361 [2024-11-04 16:37:06.133164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.361 [2024-11-04 16:37:06.133616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-11-04 16:37:06.133661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:39.361 [2024-11-04 16:37:06.133684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:39.361 [2024-11-04 16:37:06.134072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:39.361 [2024-11-04 16:37:06.134232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.361 [2024-11-04 16:37:06.134242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.361 [2024-11-04 16:37:06.134249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.361 [2024-11-04 16:37:06.134255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.361 [2024-11-04 16:37:06.145925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.361 [2024-11-04 16:37:06.146340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.361 [2024-11-04 16:37:06.146357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.361 [2024-11-04 16:37:06.146364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.361 [2024-11-04 16:37:06.146523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.361 [2024-11-04 16:37:06.146705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.361 [2024-11-04 16:37:06.146721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.361 [2024-11-04 16:37:06.146728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.361 [2024-11-04 16:37:06.146734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.361 [2024-11-04 16:37:06.158695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.361 [2024-11-04 16:37:06.159048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.361 [2024-11-04 16:37:06.159095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.361 [2024-11-04 16:37:06.159119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.361 [2024-11-04 16:37:06.159712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.361 [2024-11-04 16:37:06.160182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.361 [2024-11-04 16:37:06.160191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.361 [2024-11-04 16:37:06.160198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.361 [2024-11-04 16:37:06.160205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.361 [2024-11-04 16:37:06.171462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.361 [2024-11-04 16:37:06.171818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.361 [2024-11-04 16:37:06.171836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.361 [2024-11-04 16:37:06.171845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.361 [2024-11-04 16:37:06.172012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.361 [2024-11-04 16:37:06.172180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.361 [2024-11-04 16:37:06.172190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.361 [2024-11-04 16:37:06.172196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.361 [2024-11-04 16:37:06.172203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.184538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.184979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.184997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.185005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.185165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.185325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.185335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.185342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.185352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.197487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.197929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.197979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.198007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.198366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.198527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.198537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.198545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.198551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.210416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.210824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.210866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.210893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.211409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.211569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.211578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.211585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.211591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.223155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.223574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.223591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.223599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.223788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.223957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.223966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.223973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.223979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.235977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.236395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.236441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.236465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.237061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.237435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.237444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.237450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.237457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.621 [2024-11-04 16:37:06.248803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.621 [2024-11-04 16:37:06.249219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.621 [2024-11-04 16:37:06.249270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.621 [2024-11-04 16:37:06.249293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.621 [2024-11-04 16:37:06.249887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.621 [2024-11-04 16:37:06.250148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.621 [2024-11-04 16:37:06.250154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.621 [2024-11-04 16:37:06.250162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.621 [2024-11-04 16:37:06.250168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.261839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.262246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.262264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.262272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.262441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.262614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.262641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.262648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.262655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.274574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.274950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.274967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.274978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.275137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.275297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.275306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.275312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.275319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.287349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.287678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.287696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.287704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.287873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.288041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.288051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.288058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.288064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.300139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.300545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.300562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.300569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.300735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.300896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.300905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.300911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.300917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.312906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.313320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.313337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.313345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.313503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.313686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.313699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.313707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.313713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.325742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.326158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.326175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.326182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.326341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.326500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.326508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.326515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.326520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.338464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.338804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.338822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.338830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.338991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.339151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.339161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.339169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.339175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.351336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.351748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.351796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.351821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.352347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.352516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.352524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.352531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.352540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.364068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.622 [2024-11-04 16:37:06.364475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.622 [2024-11-04 16:37:06.364513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.622 [2024-11-04 16:37:06.364540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.622 [2024-11-04 16:37:06.365129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.622 [2024-11-04 16:37:06.365445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.622 [2024-11-04 16:37:06.365454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.622 [2024-11-04 16:37:06.365461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.622 [2024-11-04 16:37:06.365467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.622 [2024-11-04 16:37:06.376968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.377378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.377418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.377444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.378039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.378591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.378605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.378612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.378620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.623 [2024-11-04 16:37:06.389797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.390206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.390252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.390276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.390737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.390907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.390915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.390922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.390927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.623 [2024-11-04 16:37:06.402597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.403017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.403035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.403042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.403201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.403361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.403370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.403377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.403383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.623 [2024-11-04 16:37:06.415446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.415802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.415819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.415826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.415985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.416144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.416154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.416160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.416166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.623 [2024-11-04 16:37:06.428222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.428566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.428583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.428590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.428778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.428946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.428956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.428962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.428969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.623 [2024-11-04 16:37:06.441186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.623 [2024-11-04 16:37:06.441637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.623 [2024-11-04 16:37:06.441655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.623 [2024-11-04 16:37:06.441667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.623 [2024-11-04 16:37:06.441836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.623 [2024-11-04 16:37:06.442004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.623 [2024-11-04 16:37:06.442014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.623 [2024-11-04 16:37:06.442021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.623 [2024-11-04 16:37:06.442028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.454056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.454492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.454511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.454520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.454695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.454863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.454874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.454881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.454887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.466811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.467148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.467165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.467174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.467333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.467492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.467502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.467508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.467514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.479580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.479996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.480013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.480021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.480180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.480339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.480352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.480359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.480365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.492417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.492831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.492850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.492857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.493025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.493194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.493205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.493213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.493219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.505237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.505654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.505699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.505724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.506112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.506273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.506282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.506289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.506295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.518111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.518534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.883 [2024-11-04 16:37:06.518579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.883 [2024-11-04 16:37:06.518616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.883 [2024-11-04 16:37:06.519099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.883 [2024-11-04 16:37:06.519259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.883 [2024-11-04 16:37:06.519269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.883 [2024-11-04 16:37:06.519275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.883 [2024-11-04 16:37:06.519285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.883 [2024-11-04 16:37:06.530966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.883 [2024-11-04 16:37:06.531308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.531325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.531332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.531490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.531656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.531666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.531672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.531679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.543690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.544147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.544192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.544216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.544706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.544868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.544877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.544883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.544889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.556617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.557005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.557022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.557029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.557187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.557347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.557356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.557362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.557368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.569442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.569866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.569914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.569940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.570519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.571065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.571074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.571081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.571087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.582221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.582636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.582653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.582662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.582821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.582980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.582990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.582996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.583003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.594970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.595404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.595421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.595428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.595586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.595774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.595785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.595792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.595798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.608012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.608443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.608461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.608470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.608653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.608839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.608848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.608855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.608862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.620965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.621387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.621432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.621456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.621974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.622135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.622144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.622150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.622157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.633832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.634275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.884 [2024-11-04 16:37:06.634319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.884 [2024-11-04 16:37:06.634343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.884 [2024-11-04 16:37:06.634912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.884 [2024-11-04 16:37:06.635073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.884 [2024-11-04 16:37:06.635083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.884 [2024-11-04 16:37:06.635090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.884 [2024-11-04 16:37:06.635096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.884 [2024-11-04 16:37:06.646572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.884 [2024-11-04 16:37:06.646922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.885 [2024-11-04 16:37:06.646939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.885 [2024-11-04 16:37:06.646946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.885 [2024-11-04 16:37:06.647104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.885 [2024-11-04 16:37:06.647264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.885 [2024-11-04 16:37:06.647277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.885 [2024-11-04 16:37:06.647283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.885 [2024-11-04 16:37:06.647289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.885 [2024-11-04 16:37:06.659402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.885 [2024-11-04 16:37:06.659819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.885 [2024-11-04 16:37:06.659837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.885 [2024-11-04 16:37:06.659845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.885 [2024-11-04 16:37:06.660015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.885 [2024-11-04 16:37:06.660175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.885 [2024-11-04 16:37:06.660184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.885 [2024-11-04 16:37:06.660191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.885 [2024-11-04 16:37:06.660197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.885 7547.75 IOPS, 29.48 MiB/s [2024-11-04T15:37:06.709Z]
00:25:39.885 [2024-11-04 16:37:06.672155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.885 [2024-11-04 16:37:06.672569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.885 [2024-11-04 16:37:06.672586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.885 [2024-11-04 16:37:06.672593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.885 [2024-11-04 16:37:06.672781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.885 [2024-11-04 16:37:06.672951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.885 [2024-11-04 16:37:06.672960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.885 [2024-11-04 16:37:06.672967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.885 [2024-11-04 16:37:06.672974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.885 [2024-11-04 16:37:06.685010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.885 [2024-11-04 16:37:06.685424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.885 [2024-11-04 16:37:06.685440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.885 [2024-11-04 16:37:06.685448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.885 [2024-11-04 16:37:06.685611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.885 [2024-11-04 16:37:06.685795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.885 [2024-11-04 16:37:06.685804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.885 [2024-11-04 16:37:06.685815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.885 [2024-11-04 16:37:06.685822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.885 [2024-11-04 16:37:06.697891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.885 [2024-11-04 16:37:06.698233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.885 [2024-11-04 16:37:06.698250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:39.885 [2024-11-04 16:37:06.698258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:39.885 [2024-11-04 16:37:06.698425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:39.885 [2024-11-04 16:37:06.698593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.885 [2024-11-04 16:37:06.698610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.885 [2024-11-04 16:37:06.698617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.885 [2024-11-04 16:37:06.698624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.145 [2024-11-04 16:37:06.710884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.145 [2024-11-04 16:37:06.711322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.145 [2024-11-04 16:37:06.711341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.145 [2024-11-04 16:37:06.711350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.145 [2024-11-04 16:37:06.711522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.145 [2024-11-04 16:37:06.711703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.145 [2024-11-04 16:37:06.711714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.145 [2024-11-04 16:37:06.711722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.145 [2024-11-04 16:37:06.711729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.145 [2024-11-04 16:37:06.723783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.145 [2024-11-04 16:37:06.724225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.145 [2024-11-04 16:37:06.724244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.145 [2024-11-04 16:37:06.724252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.145 [2024-11-04 16:37:06.724420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.145 [2024-11-04 16:37:06.724588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.145 [2024-11-04 16:37:06.724598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.145 [2024-11-04 16:37:06.724613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.145 [2024-11-04 16:37:06.724619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.145 [2024-11-04 16:37:06.736654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.145 [2024-11-04 16:37:06.737049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.145 [2024-11-04 16:37:06.737065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.145 [2024-11-04 16:37:06.737073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.145 [2024-11-04 16:37:06.737231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.145 [2024-11-04 16:37:06.737390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.145 [2024-11-04 16:37:06.737399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.145 [2024-11-04 16:37:06.737405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.145 [2024-11-04 16:37:06.737412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.145 [2024-11-04 16:37:06.749456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.145 [2024-11-04 16:37:06.749893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.145 [2024-11-04 16:37:06.749910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.145 [2024-11-04 16:37:06.749918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.145 [2024-11-04 16:37:06.750076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.145 [2024-11-04 16:37:06.750236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.145 [2024-11-04 16:37:06.750246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.145 [2024-11-04 16:37:06.750252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.145 [2024-11-04 16:37:06.750259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.145 [2024-11-04 16:37:06.762312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.145 [2024-11-04 16:37:06.762721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.145 [2024-11-04 16:37:06.762739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.145 [2024-11-04 16:37:06.762747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.145 [2024-11-04 16:37:06.762907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.145 [2024-11-04 16:37:06.763067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.145 [2024-11-04 16:37:06.763077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.145 [2024-11-04 16:37:06.763083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.145 [2024-11-04 16:37:06.763090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.145 [2024-11-04 16:37:06.775191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.145 [2024-11-04 16:37:06.775619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.145 [2024-11-04 16:37:06.775665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.145 [2024-11-04 16:37:06.775704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.145 [2024-11-04 16:37:06.776285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.145 [2024-11-04 16:37:06.776739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.145 [2024-11-04 16:37:06.776750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.145 [2024-11-04 16:37:06.776756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.145 [2024-11-04 16:37:06.776763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.145 [2024-11-04 16:37:06.787901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.145 [2024-11-04 16:37:06.788319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.145 [2024-11-04 16:37:06.788336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.145 [2024-11-04 16:37:06.788344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.145 [2024-11-04 16:37:06.788503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.145 [2024-11-04 16:37:06.788686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.145 [2024-11-04 16:37:06.788697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.145 [2024-11-04 16:37:06.788704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.145 [2024-11-04 16:37:06.788711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.145 [2024-11-04 16:37:06.800691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.145 [2024-11-04 16:37:06.801039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.145 [2024-11-04 16:37:06.801056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.145 [2024-11-04 16:37:06.801063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.145 [2024-11-04 16:37:06.801222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.145 [2024-11-04 16:37:06.801381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.145 [2024-11-04 16:37:06.801391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.145 [2024-11-04 16:37:06.801398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.145 [2024-11-04 16:37:06.801404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.145 [2024-11-04 16:37:06.813427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.813818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.813836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.813844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.814002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.814166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.814176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.814182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.814189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.826243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.826659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.826677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.826684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.826843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.827002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.827011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.827017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.827023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.839053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.839505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.839551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.839575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.840170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.840747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.840757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.840764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.840770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.851966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.852359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.852376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.852383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.852542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.852714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.852724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.852731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.852741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.864925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.865371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.865418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.865444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.865859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.866030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.866041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.866049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.866055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.877792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.878119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.878137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.878145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.878314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.878482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.878491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.878498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.878504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.890879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.891304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.891321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.891329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.891502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.891678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.891688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.891696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.891703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.903825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.904262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.904280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.904287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.904460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.904639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.904650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.904657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.904664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.916833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.146 [2024-11-04 16:37:06.917290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.146 [2024-11-04 16:37:06.917308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.146 [2024-11-04 16:37:06.917317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.146 [2024-11-04 16:37:06.917489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.146 [2024-11-04 16:37:06.917668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.146 [2024-11-04 16:37:06.917679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.146 [2024-11-04 16:37:06.917686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.146 [2024-11-04 16:37:06.917692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.146 [2024-11-04 16:37:06.929822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.147 [2024-11-04 16:37:06.930248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.147 [2024-11-04 16:37:06.930265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.147 [2024-11-04 16:37:06.930273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.147 [2024-11-04 16:37:06.930445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.147 [2024-11-04 16:37:06.930624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.147 [2024-11-04 16:37:06.930635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.147 [2024-11-04 16:37:06.930641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.147 [2024-11-04 16:37:06.930649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.147 [2024-11-04 16:37:06.942778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.147 [2024-11-04 16:37:06.943198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.147 [2024-11-04 16:37:06.943216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.147 [2024-11-04 16:37:06.943228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.147 [2024-11-04 16:37:06.943401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.147 [2024-11-04 16:37:06.943573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.147 [2024-11-04 16:37:06.943583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.147 [2024-11-04 16:37:06.943590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.147 [2024-11-04 16:37:06.943596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.147 [2024-11-04 16:37:06.955749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.147 [2024-11-04 16:37:06.956160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.147 [2024-11-04 16:37:06.956178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.147 [2024-11-04 16:37:06.956186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.147 [2024-11-04 16:37:06.956359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.147 [2024-11-04 16:37:06.956532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.147 [2024-11-04 16:37:06.956542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.147 [2024-11-04 16:37:06.956549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.147 [2024-11-04 16:37:06.956555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:06.969012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:06.969447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:06.969466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:06.969475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:06.969656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:06.969830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:06.969841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.407 [2024-11-04 16:37:06.969848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.407 [2024-11-04 16:37:06.969855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:06.981961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:06.982343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:06.982361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:06.982370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:06.982539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:06.982718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:06.982730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.407 [2024-11-04 16:37:06.982737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.407 [2024-11-04 16:37:06.982743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:06.994947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:06.995404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:06.995422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:06.995430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:06.995609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:06.995782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:06.995792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.407 [2024-11-04 16:37:06.995799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.407 [2024-11-04 16:37:06.995806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:07.007891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:07.008301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:07.008347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:07.008371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:07.008964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:07.009126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:07.009136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.407 [2024-11-04 16:37:07.009143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.407 [2024-11-04 16:37:07.009149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:07.020742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:07.021088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:07.021105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:07.021113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:07.021271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:07.021431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:07.021440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.407 [2024-11-04 16:37:07.021447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.407 [2024-11-04 16:37:07.021456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.407 [2024-11-04 16:37:07.033611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.407 [2024-11-04 16:37:07.033914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-04 16:37:07.033932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.407 [2024-11-04 16:37:07.033940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.407 [2024-11-04 16:37:07.034108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.407 [2024-11-04 16:37:07.034276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.407 [2024-11-04 16:37:07.034286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.034293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.034299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.046476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.046891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.046908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.046915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.047074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.047233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.047242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.047249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.047255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.059323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.059688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.059708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.059716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.059876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.060036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.060047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.060053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.060060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.072203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.072634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.072677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.072704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.073233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.073394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.073403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.073409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.073415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.084998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.085356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.085374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.085382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.085978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.086149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.086159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.086166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.086172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.097903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.098298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.098315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.098323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.098481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.098663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.098673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.098680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.098688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.110737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.111061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.111078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.111090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.111251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.111413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.111424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.111431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.111438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.123792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.124227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.124245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.124254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.124426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.124607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.124618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.124627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.124636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.136715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.137105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.137123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.408 [2024-11-04 16:37:07.137130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.408 [2024-11-04 16:37:07.137289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.408 [2024-11-04 16:37:07.137448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.408 [2024-11-04 16:37:07.137458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.408 [2024-11-04 16:37:07.137464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.408 [2024-11-04 16:37:07.137470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.408 [2024-11-04 16:37:07.149629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.408 [2024-11-04 16:37:07.149923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-04 16:37:07.149968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.149991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.150527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.150717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.150727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.150734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.150741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.162452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.162828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.162846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.162854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.163021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.163188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.163198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.163205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.163212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.175230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.175688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.175734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.175757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.176299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.176467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.176477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.176484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.176490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.188109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.188526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.188543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.188550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.188736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.188905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.188914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.188921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.188930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.200979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.201450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.201467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.201475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.201655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.201824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.201834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.201841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.201847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.213812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.214097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.214115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.214123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.214289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.214458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.214469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.214475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.214483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.409 [2024-11-04 16:37:07.226774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.409 [2024-11-04 16:37:07.227137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-04 16:37:07.227155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.409 [2024-11-04 16:37:07.227164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.409 [2024-11-04 16:37:07.227346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.409 [2024-11-04 16:37:07.227527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.409 [2024-11-04 16:37:07.227539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.409 [2024-11-04 16:37:07.227546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.409 [2024-11-04 16:37:07.227554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.669 [2024-11-04 16:37:07.239852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.669 [2024-11-04 16:37:07.240225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-11-04 16:37:07.240244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.669 [2024-11-04 16:37:07.240252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.669 [2024-11-04 16:37:07.240426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.669 [2024-11-04 16:37:07.240606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.240617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.240624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.240631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.252926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.253407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.253426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.253435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.253622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.253818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.253828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.253835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.253842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.266161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.266588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.266612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.266621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.266804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.266990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.267001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.267008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.267015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.279433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.279860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.279879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.279890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.280074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.280259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.280269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.280277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.280284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.292583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.292918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.292936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.292944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.293116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.293313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.293323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.293330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.293337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.305585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.305948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.305966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.305973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.306146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.306319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.306329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.306336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.306343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.318647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.319078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.319096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.319104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.319277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.319454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.319463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.319470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.319476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.331619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.332047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.332090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.332114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.332709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.333153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.333163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.333170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.333176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.344394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.344704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.344722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.344730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.670 [2024-11-04 16:37:07.344898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.670 [2024-11-04 16:37:07.345066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.670 [2024-11-04 16:37:07.345075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.670 [2024-11-04 16:37:07.345082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.670 [2024-11-04 16:37:07.345088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.670 [2024-11-04 16:37:07.357311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.670 [2024-11-04 16:37:07.357738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-11-04 16:37:07.357787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.670 [2024-11-04 16:37:07.357812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.671 [2024-11-04 16:37:07.358195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.671 [2024-11-04 16:37:07.358356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.671 [2024-11-04 16:37:07.358365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.671 [2024-11-04 16:37:07.358371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.671 [2024-11-04 16:37:07.358381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.671 [2024-11-04 16:37:07.370063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.671 [2024-11-04 16:37:07.370492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-11-04 16:37:07.370510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.671 [2024-11-04 16:37:07.370518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.671 [2024-11-04 16:37:07.370701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.671 [2024-11-04 16:37:07.370871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.671 [2024-11-04 16:37:07.370883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.671 [2024-11-04 16:37:07.370890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.671 [2024-11-04 16:37:07.370897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.671 [2024-11-04 16:37:07.383022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.383458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.383476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.383485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.383664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.383839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.383859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.383868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.383876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.395902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.396332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.396378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.396402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.396859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.397030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.397040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.397046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.397053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.408629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.408982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.408998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.409005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.409164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.409323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.409332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.409339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.409345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.421455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.421874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.421891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.421899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.422057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.422217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.422226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.422232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.422239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.434209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.434569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.434587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.434595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.434769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.434937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.434947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.434954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.434961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.447073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.447496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.447514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.447525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.447708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.447877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.447886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.447893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.447899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.459932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.460356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.460401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.671 [2024-11-04 16:37:07.460424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.671 [2024-11-04 16:37:07.460984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.671 [2024-11-04 16:37:07.461155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.671 [2024-11-04 16:37:07.461164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.671 [2024-11-04 16:37:07.461171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.671 [2024-11-04 16:37:07.461177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.671 [2024-11-04 16:37:07.472663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.671 [2024-11-04 16:37:07.473023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.671 [2024-11-04 16:37:07.473068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.672 [2024-11-04 16:37:07.473091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.672 [2024-11-04 16:37:07.473684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.672 [2024-11-04 16:37:07.474269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.672 [2024-11-04 16:37:07.474306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.672 [2024-11-04 16:37:07.474313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.672 [2024-11-04 16:37:07.474320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.672 [2024-11-04 16:37:07.485575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.672 [2024-11-04 16:37:07.485997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.672 [2024-11-04 16:37:07.486014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.672 [2024-11-04 16:37:07.486021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.672 [2024-11-04 16:37:07.486180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.672 [2024-11-04 16:37:07.486339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.672 [2024-11-04 16:37:07.486351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.672 [2024-11-04 16:37:07.486358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.672 [2024-11-04 16:37:07.486365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.931 [2024-11-04 16:37:07.498496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.931 [2024-11-04 16:37:07.498943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.931 [2024-11-04 16:37:07.498968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.931 [2024-11-04 16:37:07.498978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.931 [2024-11-04 16:37:07.499162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.931 [2024-11-04 16:37:07.499341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.931 [2024-11-04 16:37:07.499351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.931 [2024-11-04 16:37:07.499359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.931 [2024-11-04 16:37:07.499366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.931 [2024-11-04 16:37:07.511326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.931 [2024-11-04 16:37:07.511710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.931 [2024-11-04 16:37:07.511728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.931 [2024-11-04 16:37:07.511736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.931 [2024-11-04 16:37:07.511896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.931 [2024-11-04 16:37:07.512056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.931 [2024-11-04 16:37:07.512065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.931 [2024-11-04 16:37:07.512072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.931 [2024-11-04 16:37:07.512078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.931 [2024-11-04 16:37:07.524317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.931 [2024-11-04 16:37:07.524717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.931 [2024-11-04 16:37:07.524736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.931 [2024-11-04 16:37:07.524744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.931 [2024-11-04 16:37:07.524913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.525072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.525082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.525089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.525099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.537168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.537497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.537541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.537565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.538158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.538389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.538399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.538406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.538413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.550357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.550693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.550711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.550720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.550888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.551057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.551067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.551074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.551081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.563116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.563545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.563596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.563639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.564212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.564382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.564392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.564399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.564405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.575972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.576294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.576338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.576362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.576954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.577538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.577570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.577577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.577583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.588696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.589103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.589148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.589172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.589673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.589953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.589966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.589978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.589989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.602187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.602629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.602647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.602656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.602839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.603024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.603034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.603041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.603048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.614990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.615408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.615425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.615435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.615595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.932 [2024-11-04 16:37:07.615783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.932 [2024-11-04 16:37:07.615793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.932 [2024-11-04 16:37:07.615800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.932 [2024-11-04 16:37:07.615807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.932 [2024-11-04 16:37:07.627723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.932 [2024-11-04 16:37:07.628086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.932 [2024-11-04 16:37:07.628104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.932 [2024-11-04 16:37:07.628113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.932 [2024-11-04 16:37:07.628274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.933 [2024-11-04 16:37:07.628434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.933 [2024-11-04 16:37:07.628445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.933 [2024-11-04 16:37:07.628451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.933 [2024-11-04 16:37:07.628458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.933 [2024-11-04 16:37:07.640815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.933 [2024-11-04 16:37:07.641174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.933 [2024-11-04 16:37:07.641193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.933 [2024-11-04 16:37:07.641202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.933 [2024-11-04 16:37:07.641374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.933 [2024-11-04 16:37:07.641547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.933 [2024-11-04 16:37:07.641557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.933 [2024-11-04 16:37:07.641564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.933 [2024-11-04 16:37:07.641570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.933 [2024-11-04 16:37:07.653742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.933 [2024-11-04 16:37:07.654072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.933 [2024-11-04 16:37:07.654089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.933 [2024-11-04 16:37:07.654097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.933 [2024-11-04 16:37:07.654255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.933 [2024-11-04 16:37:07.654414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.933 [2024-11-04 16:37:07.654426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.933 [2024-11-04 16:37:07.654433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.933 [2024-11-04 16:37:07.654439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.933 [2024-11-04 16:37:07.666519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.933 [2024-11-04 16:37:07.666861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.933 [2024-11-04 16:37:07.666878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.933 [2024-11-04 16:37:07.666885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.933 [2024-11-04 16:37:07.667044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.933 [2024-11-04 16:37:07.667202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.933 [2024-11-04 16:37:07.667212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.933 [2024-11-04 16:37:07.667218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.933 [2024-11-04 16:37:07.667224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.933 6038.20 IOPS, 23.59 MiB/s [2024-11-04T15:37:07.757Z] [2024-11-04 16:37:07.679270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.933 [2024-11-04 16:37:07.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.933 [2024-11-04 16:37:07.679710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420
00:25:40.933 [2024-11-04 16:37:07.679718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set
00:25:40.933 [2024-11-04 16:37:07.679877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor
00:25:40.933 [2024-11-04 16:37:07.680037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.933 [2024-11-04 16:37:07.680046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.933 [2024-11-04 16:37:07.680052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.933 [2024-11-04 16:37:07.680059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:40.933 [2024-11-04 16:37:07.692123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.933 [2024-11-04 16:37:07.692520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.933 [2024-11-04 16:37:07.692537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.933 [2024-11-04 16:37:07.692544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.933 [2024-11-04 16:37:07.692728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.933 [2024-11-04 16:37:07.692897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.933 [2024-11-04 16:37:07.692907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.933 [2024-11-04 16:37:07.692917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.933 [2024-11-04 16:37:07.692924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.933 [2024-11-04 16:37:07.704954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.933 [2024-11-04 16:37:07.705368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.933 [2024-11-04 16:37:07.705385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.933 [2024-11-04 16:37:07.705392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.933 [2024-11-04 16:37:07.705550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.933 [2024-11-04 16:37:07.705736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.933 [2024-11-04 16:37:07.705746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.933 [2024-11-04 16:37:07.705752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.933 [2024-11-04 16:37:07.705758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.933 [2024-11-04 16:37:07.717697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.933 [2024-11-04 16:37:07.718108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.933 [2024-11-04 16:37:07.718151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.933 [2024-11-04 16:37:07.718176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.933 [2024-11-04 16:37:07.718737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.933 [2024-11-04 16:37:07.718907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.933 [2024-11-04 16:37:07.718917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.933 [2024-11-04 16:37:07.718923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.933 [2024-11-04 16:37:07.718929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.933 [2024-11-04 16:37:07.730475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.933 [2024-11-04 16:37:07.730876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.933 [2024-11-04 16:37:07.730894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.933 [2024-11-04 16:37:07.730902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.933 [2024-11-04 16:37:07.731060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.933 [2024-11-04 16:37:07.731220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.933 [2024-11-04 16:37:07.731229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.933 [2024-11-04 16:37:07.731236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.933 [2024-11-04 16:37:07.731242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.933 [2024-11-04 16:37:07.743311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.933 [2024-11-04 16:37:07.743731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.933 [2024-11-04 16:37:07.743748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:40.933 [2024-11-04 16:37:07.743757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:40.933 [2024-11-04 16:37:07.743915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:40.934 [2024-11-04 16:37:07.744076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.934 [2024-11-04 16:37:07.744085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.934 [2024-11-04 16:37:07.744091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.934 [2024-11-04 16:37:07.744097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.193 [2024-11-04 16:37:07.756493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.193 [2024-11-04 16:37:07.756892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.193 [2024-11-04 16:37:07.756911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.193 [2024-11-04 16:37:07.756919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.193 [2024-11-04 16:37:07.757078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.193 [2024-11-04 16:37:07.757238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.193 [2024-11-04 16:37:07.757248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.193 [2024-11-04 16:37:07.757254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.193 [2024-11-04 16:37:07.757261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.193 [2024-11-04 16:37:07.769253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.193 [2024-11-04 16:37:07.769671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.193 [2024-11-04 16:37:07.769689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.193 [2024-11-04 16:37:07.769697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.193 [2024-11-04 16:37:07.769857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.193 [2024-11-04 16:37:07.770015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.193 [2024-11-04 16:37:07.770025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.193 [2024-11-04 16:37:07.770031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.193 [2024-11-04 16:37:07.770038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.193 [2024-11-04 16:37:07.782009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.193 [2024-11-04 16:37:07.782420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.782437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.782448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.782613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.782795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.782805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.782811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.782818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.794845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.795239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.795255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.795263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.795420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.795579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.795588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.795595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.795607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.807624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.808039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.808088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.808112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.808705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.808962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.808971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.808978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.808985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.820409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.820825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.820844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.820852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.821011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.821174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.821184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.821190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.821196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.833114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.833523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.833564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.833590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.834150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.834320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.834330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.834337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.834343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.845959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.846376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.846392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.846400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.846557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.846742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.846752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.846759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.846766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.858823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.859109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.859126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.859134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.859293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.859452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.859461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.859470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.859478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.871674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.872032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.872049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.872057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.872225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.872393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.872402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.872409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.872416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.884429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.884704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.884721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.194 [2024-11-04 16:37:07.884729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.194 [2024-11-04 16:37:07.884888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.194 [2024-11-04 16:37:07.885047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.194 [2024-11-04 16:37:07.885058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.194 [2024-11-04 16:37:07.885065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.194 [2024-11-04 16:37:07.885072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.194 [2024-11-04 16:37:07.897458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.194 [2024-11-04 16:37:07.897832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.194 [2024-11-04 16:37:07.897850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.897859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.898034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.898206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.898216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.898223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.898229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.910355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.910715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.910733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.910741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.910915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.911074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.911084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.911090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.911097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.923107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.923544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.923561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.923568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.923742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.923910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.923920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.923927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.923934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.935911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.936310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.936327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.936334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.936493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.936674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.936684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.936691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.936697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.948702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.949092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.949109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.949119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.949289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.949447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.949457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.949463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.949470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.961435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.961846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.961863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.961872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.962039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.962207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.962216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.962223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.962230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.974299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.974741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.974759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.974767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.974941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.195 [2024-11-04 16:37:07.975101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.195 [2024-11-04 16:37:07.975110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.195 [2024-11-04 16:37:07.975116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.195 [2024-11-04 16:37:07.975123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.195 [2024-11-04 16:37:07.987052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.195 [2024-11-04 16:37:07.987392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.195 [2024-11-04 16:37:07.987408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.195 [2024-11-04 16:37:07.987416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.195 [2024-11-04 16:37:07.987575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.196 [2024-11-04 16:37:07.987765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.196 [2024-11-04 16:37:07.987775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.196 [2024-11-04 16:37:07.987782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.196 [2024-11-04 16:37:07.987789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.196 [2024-11-04 16:37:07.999884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.196 [2024-11-04 16:37:08.000299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.196 [2024-11-04 16:37:08.000316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.196 [2024-11-04 16:37:08.000324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.196 [2024-11-04 16:37:08.000482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.196 [2024-11-04 16:37:08.000646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.196 [2024-11-04 16:37:08.000656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.196 [2024-11-04 16:37:08.000662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.196 [2024-11-04 16:37:08.000669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.196 [2024-11-04 16:37:08.012783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.196 [2024-11-04 16:37:08.013153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.196 [2024-11-04 16:37:08.013176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.196 [2024-11-04 16:37:08.013185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.196 [2024-11-04 16:37:08.013369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.196 [2024-11-04 16:37:08.013547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.196 [2024-11-04 16:37:08.013557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.196 [2024-11-04 16:37:08.013565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.196 [2024-11-04 16:37:08.013572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.455 [2024-11-04 16:37:08.025776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.455 [2024-11-04 16:37:08.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.455 [2024-11-04 16:37:08.026266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.455 [2024-11-04 16:37:08.026293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.455 [2024-11-04 16:37:08.026712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.455 [2024-11-04 16:37:08.026884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.455 [2024-11-04 16:37:08.026893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.455 [2024-11-04 16:37:08.026906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.455 [2024-11-04 16:37:08.026914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.455 [2024-11-04 16:37:08.038629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.455 [2024-11-04 16:37:08.039030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.455 [2024-11-04 16:37:08.039047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.455 [2024-11-04 16:37:08.039055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.455 [2024-11-04 16:37:08.039213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.455 [2024-11-04 16:37:08.039372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.455 [2024-11-04 16:37:08.039381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.455 [2024-11-04 16:37:08.039388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.455 [2024-11-04 16:37:08.039394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.455 [2024-11-04 16:37:08.051400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.455 [2024-11-04 16:37:08.051755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.455 [2024-11-04 16:37:08.051773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.455 [2024-11-04 16:37:08.051781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.455 [2024-11-04 16:37:08.051940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.455 [2024-11-04 16:37:08.052100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.455 [2024-11-04 16:37:08.052110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.455 [2024-11-04 16:37:08.052116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.455 [2024-11-04 16:37:08.052122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.455 [2024-11-04 16:37:08.064217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.455 [2024-11-04 16:37:08.064645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.455 [2024-11-04 16:37:08.064663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.455 [2024-11-04 16:37:08.064671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.064831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.064990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.065000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.065006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.065013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.076931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.077355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.077372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.077380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.077539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.077725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.077735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.077742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.077748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.089783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.090180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.090197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.090204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.090362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.090520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.090529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.090536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.090542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.102597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.102994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.103011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.103018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.103176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.103334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.103343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.103349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.103356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.115414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.115769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.115787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.115798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.115957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.116117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.116126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.116132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.116139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.128224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.128657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.128702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.128726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.129305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.129712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.129722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.129729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.129735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.141068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.141464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.141534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.141979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.142150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.142159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.142166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.142172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.154035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.154394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.154435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.154463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.155038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.155211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.155221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.155228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.155235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.166865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.456 [2024-11-04 16:37:08.167282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.456 [2024-11-04 16:37:08.167300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.456 [2024-11-04 16:37:08.167307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.456 [2024-11-04 16:37:08.167465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.456 [2024-11-04 16:37:08.167645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.456 [2024-11-04 16:37:08.167656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.456 [2024-11-04 16:37:08.167663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.456 [2024-11-04 16:37:08.167669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.456 [2024-11-04 16:37:08.179659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.180079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.180096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.180104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.180262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.180421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.180430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.180436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.180442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.192514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.192917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.192934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.192941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.193100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.193258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.193267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.193273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.193282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.205360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.205779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.205796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.205804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.205962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.206121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.206131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.206137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.206143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.218216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.218639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.218685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.218709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.219174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.219335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.219344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.219351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.219356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.230943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.231252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.231269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.231276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.231434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.231594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.231609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.231616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.231623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.243693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.244110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.244127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.244134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.244293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.244454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.244464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.244470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.244476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.256441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.256807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.256825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.256833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.257000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.257168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.257178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.257185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.257192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.457 [2024-11-04 16:37:08.269224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.457 [2024-11-04 16:37:08.269637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.457 [2024-11-04 16:37:08.269654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.457 [2024-11-04 16:37:08.269662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.457 [2024-11-04 16:37:08.269822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.457 [2024-11-04 16:37:08.269982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.457 [2024-11-04 16:37:08.269991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.457 [2024-11-04 16:37:08.269998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.457 [2024-11-04 16:37:08.270004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.718 [2024-11-04 16:37:08.282107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.718 [2024-11-04 16:37:08.282535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.718 [2024-11-04 16:37:08.282590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.718 [2024-11-04 16:37:08.282642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.718 [2024-11-04 16:37:08.283130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.718 [2024-11-04 16:37:08.283316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.718 [2024-11-04 16:37:08.283326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.283333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.283340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 [2024-11-04 16:37:08.294878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.295245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.295262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.295271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.295431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.295590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.295607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.295615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.295623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 [2024-11-04 16:37:08.307841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2964261 Killed "${NVMF_APP[@]}" "$@" 00:25:41.719 [2024-11-04 16:37:08.308188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.308207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.308215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.308387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.308560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.308571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.308577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.308584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2965454 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2965454 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2965454 ']' 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.719 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.719 [2024-11-04 16:37:08.320909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.321340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.321367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.321375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.321549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.321729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.321743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.321753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.321760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 [2024-11-04 16:37:08.333918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.334330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.334347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.334355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.334528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.334707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.334717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.334724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.334730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 [2024-11-04 16:37:08.346885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.347315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.347334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.347343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.347519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.347698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.347709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.347716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.347722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.719 [2024-11-04 16:37:08.359873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.360204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.360221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.360229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.360402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.360575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.360584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.360591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.360597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:41.719 [2024-11-04 16:37:08.364385] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:25:41.719 [2024-11-04 16:37:08.364434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.719 [2024-11-04 16:37:08.373048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.719 [2024-11-04 16:37:08.373433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.719 [2024-11-04 16:37:08.373451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.719 [2024-11-04 16:37:08.373459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.719 [2024-11-04 16:37:08.373650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.719 [2024-11-04 16:37:08.373836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.719 [2024-11-04 16:37:08.373847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.719 [2024-11-04 16:37:08.373855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.719 [2024-11-04 16:37:08.373863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.386212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.386651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.386670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.386678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.386856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.387030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.387041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.387048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.387055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.399271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.399637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.399656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.399665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.399867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.400044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.400055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.400062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.400069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.412422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.412765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.412784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.412792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.412965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.413140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.413150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.413157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.413163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.425455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.425828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.425846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.425855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.426027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.426200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.426215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.426222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.426229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.433652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:41.720 [2024-11-04 16:37:08.438523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.438814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.438833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.438842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.439014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.439188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.439198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.439205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.439212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.451486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.451832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.451851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.451859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.452027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.452196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.452206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.452213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.452219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.720 [2024-11-04 16:37:08.464522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.464875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.464893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.464901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.465069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.465237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.465249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.465256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.465266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:41.720 [2024-11-04 16:37:08.474089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.720 [2024-11-04 16:37:08.474118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.720 [2024-11-04 16:37:08.474125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.720 [2024-11-04 16:37:08.474131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:41.720 [2024-11-04 16:37:08.474137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.720 [2024-11-04 16:37:08.475501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.720 [2024-11-04 16:37:08.475589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.720 [2024-11-04 16:37:08.475590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.720 [2024-11-04 16:37:08.477464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.720 [2024-11-04 16:37:08.477815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.720 [2024-11-04 16:37:08.477834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.720 [2024-11-04 16:37:08.477842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.720 [2024-11-04 16:37:08.478016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.720 [2024-11-04 16:37:08.478190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.720 [2024-11-04 16:37:08.478200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.720 [2024-11-04 16:37:08.478208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.720 [2024-11-04 16:37:08.478215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.721 [2024-11-04 16:37:08.490515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.721 [2024-11-04 16:37:08.490813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.721 [2024-11-04 16:37:08.490832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.721 [2024-11-04 16:37:08.490841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.721 [2024-11-04 16:37:08.491015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.721 [2024-11-04 16:37:08.491189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.721 [2024-11-04 16:37:08.491200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.721 [2024-11-04 16:37:08.491208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.721 [2024-11-04 16:37:08.491215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.721 [2024-11-04 16:37:08.503516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.721 [2024-11-04 16:37:08.503881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.721 [2024-11-04 16:37:08.503900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.721 [2024-11-04 16:37:08.503908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.721 [2024-11-04 16:37:08.504088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.721 [2024-11-04 16:37:08.504262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.721 [2024-11-04 16:37:08.504273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.721 [2024-11-04 16:37:08.504281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.721 [2024-11-04 16:37:08.504287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.721 [2024-11-04 16:37:08.516582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.721 [2024-11-04 16:37:08.516992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.721 [2024-11-04 16:37:08.517012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.721 [2024-11-04 16:37:08.517021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.721 [2024-11-04 16:37:08.517195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.721 [2024-11-04 16:37:08.517369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.721 [2024-11-04 16:37:08.517379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.721 [2024-11-04 16:37:08.517387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.721 [2024-11-04 16:37:08.517394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.721 [2024-11-04 16:37:08.529534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.721 [2024-11-04 16:37:08.529902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.721 [2024-11-04 16:37:08.529921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.721 [2024-11-04 16:37:08.529930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.721 [2024-11-04 16:37:08.530104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.721 [2024-11-04 16:37:08.530277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.721 [2024-11-04 16:37:08.530287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.721 [2024-11-04 16:37:08.530294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.721 [2024-11-04 16:37:08.530301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.980 [2024-11-04 16:37:08.542720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.980 [2024-11-04 16:37:08.543106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.980 [2024-11-04 16:37:08.543126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.980 [2024-11-04 16:37:08.543136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.980 [2024-11-04 16:37:08.543310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.980 [2024-11-04 16:37:08.543486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.980 [2024-11-04 16:37:08.543502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.980 [2024-11-04 16:37:08.543510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.980 [2024-11-04 16:37:08.543518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.980 [2024-11-04 16:37:08.555703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.980 [2024-11-04 16:37:08.556113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.980 [2024-11-04 16:37:08.556133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.980 [2024-11-04 16:37:08.556142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.980 [2024-11-04 16:37:08.556316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.980 [2024-11-04 16:37:08.556491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.980 [2024-11-04 16:37:08.556502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.980 [2024-11-04 16:37:08.556510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.980 [2024-11-04 16:37:08.556517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.980 [2024-11-04 16:37:08.568669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.980 [2024-11-04 16:37:08.569007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.980 [2024-11-04 16:37:08.569025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.980 [2024-11-04 16:37:08.569033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.980 [2024-11-04 16:37:08.569205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.980 [2024-11-04 16:37:08.569378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.980 [2024-11-04 16:37:08.569389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.980 [2024-11-04 16:37:08.569396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.980 [2024-11-04 16:37:08.569402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.980 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.980 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:41.980 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.980 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.980 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.980 [2024-11-04 16:37:08.581706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.980 [2024-11-04 16:37:08.582000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.980 [2024-11-04 16:37:08.582018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.980 [2024-11-04 16:37:08.582026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.980 [2024-11-04 16:37:08.582198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.980 [2024-11-04 16:37:08.582377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.980 [2024-11-04 16:37:08.582387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.980 [2024-11-04 16:37:08.582394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.980 [2024-11-04 16:37:08.582402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.980 [2024-11-04 16:37:08.594702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.980 [2024-11-04 16:37:08.595049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.595067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.595075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.595247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.595420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.595430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.595437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.595444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 [2024-11-04 16:37:08.607769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 [2024-11-04 16:37:08.608110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.608128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.608136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.608309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.608482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.608492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.608499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.608505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.981 [2024-11-04 16:37:08.619293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.981 [2024-11-04 16:37:08.620817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 [2024-11-04 16:37:08.621174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.621192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.621203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.621376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.621551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.621561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.621567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.621574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.981 [2024-11-04 16:37:08.633886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 [2024-11-04 16:37:08.634260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.634278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.634285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.634458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.634635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.634646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.634653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.634660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 [2024-11-04 16:37:08.646988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 [2024-11-04 16:37:08.647404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.647422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.647429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.647607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.647782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.647792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.647799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.647806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 [2024-11-04 16:37:08.659962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 [2024-11-04 16:37:08.660355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.660373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.660390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.660564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.660745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.660756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.660763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.660771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:41.981 Malloc0 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.981 5031.83 IOPS, 19.66 MiB/s [2024-11-04T15:37:08.805Z] 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.981 [2024-11-04 16:37:08.674197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.981 [2024-11-04 16:37:08.674493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.981 [2024-11-04 16:37:08.674511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2500 with addr=10.0.0.2, port=4420 00:25:41.981 [2024-11-04 16:37:08.674521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2500 is same with the state(6) to be set 00:25:41.981 [2024-11-04 16:37:08.674701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2500 (9): Bad file descriptor 00:25:41.981 [2024-11-04 16:37:08.674876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:41.981 [2024-11-04 16:37:08.674887] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:41.981 [2024-11-04 16:37:08.674895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:41.981 [2024-11-04 16:37:08.674902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.981 [2024-11-04 16:37:08.684897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.981 [2024-11-04 16:37:08.687182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:41.981 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.982 16:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2964516 00:25:41.982 [2024-11-04 16:37:08.792069] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:44.292 5736.29 IOPS, 22.41 MiB/s [2024-11-04T15:37:12.050Z] 6408.62 IOPS, 25.03 MiB/s [2024-11-04T15:37:12.984Z] 6976.78 IOPS, 27.25 MiB/s [2024-11-04T15:37:13.918Z] 7389.00 IOPS, 28.86 MiB/s [2024-11-04T15:37:14.853Z] 7744.27 IOPS, 30.25 MiB/s [2024-11-04T15:37:15.787Z] 8054.58 IOPS, 31.46 MiB/s [2024-11-04T15:37:16.721Z] 8287.77 IOPS, 32.37 MiB/s [2024-11-04T15:37:18.096Z] 8503.57 IOPS, 33.22 MiB/s 00:25:51.272 Latency(us) 00:25:51.272 [2024-11-04T15:37:18.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.272 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:51.272 Verification LBA range: start 0x0 length 0x4000 00:25:51.272 Nvme1n1 : 15.00 8683.57 33.92 11356.00 0.00 6367.99 608.55 26214.40 00:25:51.272 [2024-11-04T15:37:18.096Z] =================================================================================================================== 00:25:51.272 [2024-11-04T15:37:18.096Z] Total : 8683.57 33.92 11356.00 0.00 6367.99 608.55 26214.40 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:51.272 16:37:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.272 rmmod nvme_tcp 00:25:51.272 rmmod nvme_fabrics 00:25:51.272 rmmod nvme_keyring 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2965454 ']' 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2965454 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2965454 ']' 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2965454 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2965454 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2965454' 00:25:51.272 killing process with pid 2965454 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2965454 00:25:51.272 16:37:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2965454 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.531 16:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.434 16:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.434 00:25:53.434 real 0m25.816s 00:25:53.434 user 1m0.231s 00:25:53.434 sys 0m6.668s 00:25:53.434 16:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.434 16:37:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:53.434 ************************************ 00:25:53.434 END TEST nvmf_bdevperf 00:25:53.434 ************************************ 00:25:53.694 16:37:20 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.694 ************************************ 00:25:53.694 START TEST nvmf_target_disconnect 00:25:53.694 ************************************ 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:53.694 * Looking for test storage... 00:25:53.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.694 16:37:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:53.694 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.695 --rc genhtml_branch_coverage=1 00:25:53.695 --rc genhtml_function_coverage=1 00:25:53.695 --rc genhtml_legend=1 00:25:53.695 --rc geninfo_all_blocks=1 00:25:53.695 --rc geninfo_unexecuted_blocks=1 
00:25:53.695 00:25:53.695 ' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.695 --rc genhtml_branch_coverage=1 00:25:53.695 --rc genhtml_function_coverage=1 00:25:53.695 --rc genhtml_legend=1 00:25:53.695 --rc geninfo_all_blocks=1 00:25:53.695 --rc geninfo_unexecuted_blocks=1 00:25:53.695 00:25:53.695 ' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.695 --rc genhtml_branch_coverage=1 00:25:53.695 --rc genhtml_function_coverage=1 00:25:53.695 --rc genhtml_legend=1 00:25:53.695 --rc geninfo_all_blocks=1 00:25:53.695 --rc geninfo_unexecuted_blocks=1 00:25:53.695 00:25:53.695 ' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.695 --rc genhtml_branch_coverage=1 00:25:53.695 --rc genhtml_function_coverage=1 00:25:53.695 --rc genhtml_legend=1 00:25:53.695 --rc geninfo_all_blocks=1 00:25:53.695 --rc geninfo_unexecuted_blocks=1 00:25:53.695 00:25:53.695 ' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.695 16:37:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.695 16:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.966 
16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.966 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.966 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.966 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.966 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.967 Found net devices under 0000:86:00.1: cvl_0_1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.967 16:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:25:58.967 00:25:58.967 --- 10.0.0.2 ping statistics --- 00:25:58.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.967 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:58.967 00:25:58.967 --- 10.0.0.1 ping statistics --- 00:25:58.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.967 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.967 16:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:58.967 ************************************ 00:25:58.967 START TEST nvmf_target_disconnect_tc1 00:25:58.967 ************************************ 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:58.967 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.227 [2024-11-04 16:37:25.868944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.227 [2024-11-04 16:37:25.868997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160dab0 with 
addr=10.0.0.2, port=4420 00:25:59.227 [2024-11-04 16:37:25.869020] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:59.227 [2024-11-04 16:37:25.869033] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:59.227 [2024-11-04 16:37:25.869040] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:59.227 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:59.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:59.227 Initializing NVMe Controllers 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.227 00:25:59.227 real 0m0.109s 00:25:59.227 user 0m0.048s 00:25:59.227 sys 0m0.060s 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 ************************************ 00:25:59.227 END TEST nvmf_target_disconnect_tc1 00:25:59.227 ************************************ 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:59.227 16:37:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 ************************************ 00:25:59.227 START TEST nvmf_target_disconnect_tc2 00:25:59.227 ************************************ 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2970606 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2970606 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2970606 ']' 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.227 16:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 [2024-11-04 16:37:26.008712] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:25:59.227 [2024-11-04 16:37:26.008760] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.486 [2024-11-04 16:37:26.086768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.486 [2024-11-04 16:37:26.126252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.486 [2024-11-04 16:37:26.126292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.486 [2024-11-04 16:37:26.126300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.486 [2024-11-04 16:37:26.126305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.486 [2024-11-04 16:37:26.126310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.486 [2024-11-04 16:37:26.127855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:59.486 [2024-11-04 16:37:26.127942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:59.486 [2024-11-04 16:37:26.128028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:59.486 [2024-11-04 16:37:26.128030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.486 Malloc0 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.486 16:37:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.486 [2024-11-04 16:37:26.298975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.486 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.745 16:37:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:59.745 [2024-11-04 16:37:26.327196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2970641
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:25:59.745 16:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:01.657 16:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2970606
00:26:01.657 16:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:26:01.657 Read completed with error (sct=0, sc=8)
00:26:01.657 starting I/O failed
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" lines omitted ...]
00:26:01.657 [2024-11-04 16:37:28.354931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" lines omitted ...]
00:26:01.658 [2024-11-04 16:37:28.355149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" lines omitted ...]
00:26:01.658 [2024-11-04 16:37:28.355350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" lines omitted ...]
00:26:01.658 [2024-11-04 16:37:28.355542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:01.658 [2024-11-04 16:37:28.355722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.658 [2024-11-04 16:37:28.355742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:01.658 qpair failed and we were unable to recover it.
[... four more identical connect()/errno-111 retry triplets on tqpair=0xe03ba0 (16:37:28.355850 through 16:37:28.356279) omitted, each ending "qpair failed and we were unable to recover it." ...]
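For context, errno 111 in the connect() failures above is ECONNREFUSED on Linux: once the target process is killed with `kill -9`, nothing is listening on 10.0.0.2:4420, so the kernel refuses every reconnect attempt. A minimal sketch (host and port here are hypothetical, not taken from the test) reproducing that errno:

```python
# Illustration only: connect() to a port with no listener fails with
# ECONNREFUSED (errno 111 on Linux), matching the posix_sock_create
# errors in the log above.
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Return 0 on success, otherwise the connect() errno."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port))

# Port 4 is unassigned, so localhost almost certainly has no
# listener there; the kernel refuses the connection immediately.
rc = try_connect("127.0.0.1", 4)
print(rc == errno.ECONNREFUSED)  # True on Linux when nothing listens
```

Using the symbolic `errno.ECONNREFUSED` rather than the literal 111 keeps the check portable (the numeric value differs on other platforms).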
00:26:01.658 [2024-11-04 16:37:28.356377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.658 [2024-11-04 16:37:28.356388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:01.658 qpair failed and we were unable to recover it.
[... further identical connect()/errno-111 retry triplets omitted: more attempts on tqpair=0xe03ba0, a long run on tqpair=0x7f64a0000b90 (starting 16:37:28.357237), single attempts on tqpair=0x7f64ac000b90 and tqpair=0x7f64a4000b90, then renewed attempts on tqpair=0xe03ba0 through 16:37:28.364001, each ending "qpair failed and we were unable to recover it." ...]
00:26:01.660 [2024-11-04 16:37:28.364132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 00:26:01.660 [2024-11-04 16:37:28.364206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 00:26:01.660 [2024-11-04 16:37:28.364345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 00:26:01.660 [2024-11-04 16:37:28.364481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 00:26:01.660 [2024-11-04 16:37:28.364562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 
00:26:01.660 [2024-11-04 16:37:28.364659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.660 [2024-11-04 16:37:28.364683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.660 qpair failed and we were unable to recover it. 00:26:01.660 [2024-11-04 16:37:28.364766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.364780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.364961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.364975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.365323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.365790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.365881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.365894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.366466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.366883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.367035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.367614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.367939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.367953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.368223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 
00:26:01.661 [2024-11-04 16:37:28.368771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.368973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.661 [2024-11-04 16:37:28.368986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.661 qpair failed and we were unable to recover it. 00:26:01.661 [2024-11-04 16:37:28.369059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.369230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.369718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.369942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.369953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.370265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.370694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.370924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.370937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.371176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 
00:26:01.662 [2024-11-04 16:37:28.371778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.371864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.371876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.662 [2024-11-04 16:37:28.372022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.662 [2024-11-04 16:37:28.372034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.662 qpair failed and we were unable to recover it. 00:26:01.663 [2024-11-04 16:37:28.372117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.663 [2024-11-04 16:37:28.372128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.663 qpair failed and we were unable to recover it. 00:26:01.663 [2024-11-04 16:37:28.372270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.663 [2024-11-04 16:37:28.372282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.663 qpair failed and we were unable to recover it. 
00:26:01.663 [2024-11-04 16:37:28.372356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.663 [2024-11-04 16:37:28.372368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.663 qpair failed and we were unable to recover it.
00:26:01.663 [2024-11-04 16:37:28.373402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.663 [2024-11-04 16:37:28.373418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.663 qpair failed and we were unable to recover it.
00:26:01.665 [2024-11-04 16:37:28.383125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.665 [2024-11-04 16:37:28.383145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:01.665 qpair failed and we were unable to recover it.
[the same three-line failure (posix_sock_create connect() errno = 111, followed by nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x7f64a4000b90, 0x7f64a0000b90, and 0xe03ba0 against addr=10.0.0.2, port=4420 through 16:37:28.386981]
00:26:01.666 [2024-11-04 16:37:28.387135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.387261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.387415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.387518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.387664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 
00:26:01.666 [2024-11-04 16:37:28.387822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.387835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.387993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 
00:26:01.666 [2024-11-04 16:37:28.388482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.388908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.388920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 
00:26:01.666 [2024-11-04 16:37:28.389101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 
00:26:01.666 [2024-11-04 16:37:28.389676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.389953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.389965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.666 qpair failed and we were unable to recover it. 00:26:01.666 [2024-11-04 16:37:28.390053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.666 [2024-11-04 16:37:28.390065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.390264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.390715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.390947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.390964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.391373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.391843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.391923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.391935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.392341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.392974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.392987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.393063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.393139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.393235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.393321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.667 [2024-11-04 16:37:28.393414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 
00:26:01.667 [2024-11-04 16:37:28.393652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.667 [2024-11-04 16:37:28.393665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.667 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.393761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.393773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.393842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.393854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.393983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.393995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.394141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 
00:26:01.668 [2024-11-04 16:37:28.394286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.394463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.394626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.394793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.394875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.394887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 
00:26:01.668 [2024-11-04 16:37:28.395056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.395087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.395200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.395232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.395410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.395443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.395572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.395621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.395745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.395779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 
00:26:01.668 [2024-11-04 16:37:28.395968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.396001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.396121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.396153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.396339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.396372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.396560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.396592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 00:26:01.668 [2024-11-04 16:37:28.396747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.668 [2024-11-04 16:37:28.396781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.668 qpair failed and we were unable to recover it. 
00:26:01.668 [2024-11-04 16:37:28.396901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.668 [2024-11-04 16:37:28.396934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.668 qpair failed and we were unable to recover it.
00:26:01.671 [... the same three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~115 times between 16:37:28.396 and 16:37:28.416; repeated records elided ...]
00:26:01.671 [2024-11-04 16:37:28.416924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.416958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.417089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.417101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.417187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.417198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.417334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.417346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.417504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.417537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 
00:26:01.671 [2024-11-04 16:37:28.417732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.417767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.417981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 
00:26:01.671 [2024-11-04 16:37:28.418646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 00:26:01.671 [2024-11-04 16:37:28.418976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.671 [2024-11-04 16:37:28.418987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.671 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.419135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.419238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.419335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.419560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.419729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.419894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.419926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.420569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.420851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.420861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.421478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.421498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.421583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.421594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.421674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.421686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.421758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.421769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.421917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.421929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.422056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.422228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.422378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.422534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.422768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.422942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.422975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.423078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.423647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.423955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.423966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.424032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.424044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 
00:26:01.672 [2024-11-04 16:37:28.424126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.424137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.425396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.425416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.425562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.672 [2024-11-04 16:37:28.425575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.672 qpair failed and we were unable to recover it. 00:26:01.672 [2024-11-04 16:37:28.425652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.425663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.425899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.425910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.426085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.426097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.426186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.426219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.426405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.426437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.426556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.426595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.426813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.426846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.427027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.427188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.427419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.427491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.427592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.427691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.427865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.427877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.428019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.428096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.428267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.428412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.428627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.428799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.429691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.429714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.429976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.430146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.430306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.430549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.430706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.430879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.430912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.431105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.431139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.431318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.431545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.431557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.431769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.431804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.432053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.432215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.432363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.432503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.432651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.673 [2024-11-04 16:37:28.432745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 
00:26:01.673 [2024-11-04 16:37:28.432944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.673 [2024-11-04 16:37:28.432977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.673 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.433156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.433188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.433420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.433433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.433526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.433537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.433621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.433632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.433723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.433733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.434746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.434768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.434956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.434968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.435134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.435167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.435286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.435444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.435478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.435708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.435743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.435863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.435895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.436051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.436083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.436346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.436379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.436498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.436510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.436678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.436713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.436832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.436864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.436982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.437014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.437143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.437189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.437259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.437269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.437334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.437345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.437503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.437536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.438638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.438660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.438767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.438779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.438909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.438921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.439052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.439064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.439227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.439239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.439788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.439811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.440041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.440210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.440373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.440572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.440674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.440892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.440925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 
00:26:01.674 [2024-11-04 16:37:28.441124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.441157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.441286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.441319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.441439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.441477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.441560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.674 [2024-11-04 16:37:28.441571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.674 qpair failed and we were unable to recover it. 00:26:01.674 [2024-11-04 16:37:28.441657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.441669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.441868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.441902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.442147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.442376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.442411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.442551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.442584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.442779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.442813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.443007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.443283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.443443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.443652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.443745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.443901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.443985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.443995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.444062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.444073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.444728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.444749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.444932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.444966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.445197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.445270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.445487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.445523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.445666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.445701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.445829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.445861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.446047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.446080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.446257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.446290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.446476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.446510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.446746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.446780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.447700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.447728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.447830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.447847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.448016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.448033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.448102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.448118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.448228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.448243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.449501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.449930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.449997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.450013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 
00:26:01.675 [2024-11-04 16:37:28.450096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.450128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.450374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.450408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.675 [2024-11-04 16:37:28.450588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.675 [2024-11-04 16:37:28.450631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.675 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.450745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.450778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.450915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.450947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 
00:26:01.676 [2024-11-04 16:37:28.451057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.451089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.451263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.451297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.452501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.452530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.452789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.452808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.452910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.452943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 
00:26:01.676 [2024-11-04 16:37:28.453054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.453085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.453216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.453519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.453551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.453738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.453772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 00:26:01.676 [2024-11-04 16:37:28.453950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.676 [2024-11-04 16:37:28.453982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.676 qpair failed and we were unable to recover it. 
00:26:01.676 [2024-11-04 16:37:28.454109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.454260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.454424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.454565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.454733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.454884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.454925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.455125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.455316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.455484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.455648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.455823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.455970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.456116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.456334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.456577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.456748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.456911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.456942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.457119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.457151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.457286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.457328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.457454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.457487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.457683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.457717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.676 [2024-11-04 16:37:28.457839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.676 [2024-11-04 16:37:28.457872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.676 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.458052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.458085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.459073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.459103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.459375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.459415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.459712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.459747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.459950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.459983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.460188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.460221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.460359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.460377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.460536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.460568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.460776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.460847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.460991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.461805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.461994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.462935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.462965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.463909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.463926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.464930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.464946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.465087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.465108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.465275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.677 [2024-11-04 16:37:28.465292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.677 qpair failed and we were unable to recover it.
00:26:01.677 [2024-11-04 16:37:28.465437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.465553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.465650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.465736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.465816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.465891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.465901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.466950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.466961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.467961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.467973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.468976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.468987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.469062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.678 [2024-11-04 16:37:28.469072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.678 qpair failed and we were unable to recover it.
00:26:01.678 [2024-11-04 16:37:28.469136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.678 [2024-11-04 16:37:28.469147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.678 qpair failed and we were unable to recover it. 00:26:01.678 [2024-11-04 16:37:28.469231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.678 [2024-11-04 16:37:28.469242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.678 qpair failed and we were unable to recover it. 00:26:01.678 [2024-11-04 16:37:28.469383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.678 [2024-11-04 16:37:28.469395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.678 qpair failed and we were unable to recover it. 00:26:01.678 [2024-11-04 16:37:28.469530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.678 [2024-11-04 16:37:28.469541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.678 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.469622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.469634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 
00:26:01.679 [2024-11-04 16:37:28.469779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.469790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.469878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.469889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.469959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.469970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 
00:26:01.679 [2024-11-04 16:37:28.470258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.470767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 
00:26:01.679 [2024-11-04 16:37:28.470922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.470934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.679 [2024-11-04 16:37:28.471080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.679 [2024-11-04 16:37:28.471092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.679 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 
00:26:01.949 [2024-11-04 16:37:28.471412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.471806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 
00:26:01.949 [2024-11-04 16:37:28.471911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.471922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 
00:26:01.949 [2024-11-04 16:37:28.472379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 
00:26:01.949 [2024-11-04 16:37:28.472900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.949 qpair failed and we were unable to recover it. 00:26:01.949 [2024-11-04 16:37:28.472974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.949 [2024-11-04 16:37:28.472985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.473390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.473844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.473929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.473939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.474328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.474777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.474969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.474979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.475213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 
00:26:01.950 [2024-11-04 16:37:28.475628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.950 [2024-11-04 16:37:28.475639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.950 qpair failed and we were unable to recover it. 00:26:01.950 [2024-11-04 16:37:28.475711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.475737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.475817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.475828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.475894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.475905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.475965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.475977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.476048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.476487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.476966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.476977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.477100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.477740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.477907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.477917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.478136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.478285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.478374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.478530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.478599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.478720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.478731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 00:26:01.951 [2024-11-04 16:37:28.480470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.951 [2024-11-04 16:37:28.480495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.951 qpair failed and we were unable to recover it. 
00:26:01.951 [2024-11-04 16:37:28.480650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.951 [2024-11-04 16:37:28.480663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.951 qpair failed and we were unable to recover it.
00:26:01.951 [2024-11-04 16:37:28.480736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.951 [2024-11-04 16:37:28.480748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.951 qpair failed and we were unable to recover it.
00:26:01.951 [2024-11-04 16:37:28.480848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.951 [2024-11-04 16:37:28.480862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.951 qpair failed and we were unable to recover it.
00:26:01.951 [2024-11-04 16:37:28.481113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.951 [2024-11-04 16:37:28.481135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.951 qpair failed and we were unable to recover it.
00:26:01.951 [2024-11-04 16:37:28.481310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.481329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.481516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.481535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.481626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.481643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.481842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.481857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.481988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.482000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.482151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.482185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.482377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.482410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.482626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.482661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.482778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.482790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.482988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.483022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.483160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.483193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.483404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.483445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.483665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.483678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.483822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.483855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.483967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.484000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.484143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.484176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.484430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.484463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.484643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.484678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.484857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.484891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.485184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.485422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.485575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.485662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.485842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.485979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.486152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.486331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.486553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.486734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.486964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.486997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.487132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.487164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.487341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.487374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.487576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.487801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.952 [2024-11-04 16:37:28.487812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.952 qpair failed and we were unable to recover it.
00:26:01.952 [2024-11-04 16:37:28.487882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.487893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.487979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.488010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.488195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.488229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.488418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.488451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.488583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.488595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.488711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.488724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.489411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.489432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.489575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.489587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.489685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.489697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.489797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.489808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.489893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.489904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.490687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.490709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.490929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.490943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.491102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.491262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.491431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.491597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.491772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.491983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.492198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.492351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.492438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.492591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.492778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.492811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.493941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.493951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.494027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.494038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.953 [2024-11-04 16:37:28.494113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.953 [2024-11-04 16:37:28.494124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.953 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.494209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.494220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.494360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.494371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.494518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.494528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.495159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.495179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.495360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.495394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.495506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.495540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.495780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.495816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.495994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.496988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.496998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.497841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.497851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.498003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.954 [2024-11-04 16:37:28.498034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.954 qpair failed and we were unable to recover it.
00:26:01.954 [2024-11-04 16:37:28.498150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.498308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.498449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.498621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.498776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 
00:26:01.954 [2024-11-04 16:37:28.498937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.498970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.499153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.499186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.954 [2024-11-04 16:37:28.499316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.954 [2024-11-04 16:37:28.499348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.954 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.499529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.499561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.499707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.499743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.499936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.499969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.500598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.500816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.500998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.501360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.501876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.501950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.501982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.502111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.502143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.502282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.502314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.502583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.502633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.502760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.502793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.502967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.502999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.503128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.503365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.503440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.503524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.503743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.503888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.503921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.504534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.504893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 
00:26:01.955 [2024-11-04 16:37:28.504976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.955 [2024-11-04 16:37:28.504987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.955 qpair failed and we were unable to recover it. 00:26:01.955 [2024-11-04 16:37:28.505144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.505238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.505365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.505513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.505588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.505748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.505830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.505840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.506172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.506767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.506922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.506934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.507269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.507750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.507822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.507833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.508281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 
00:26:01.956 [2024-11-04 16:37:28.508729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.508921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.508998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.509008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.956 [2024-11-04 16:37:28.509082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.956 [2024-11-04 16:37:28.509092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.956 qpair failed and we were unable to recover it. 00:26:01.957 [2024-11-04 16:37:28.509154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-04 16:37:28.509231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 00:26:01.957 [2024-11-04 16:37:28.509316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 00:26:01.957 [2024-11-04 16:37:28.509401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 00:26:01.957 [2024-11-04 16:37:28.509490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 00:26:01.957 [2024-11-04 16:37:28.509649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.957 [2024-11-04 16:37:28.509660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.959 [2024-11-04 16:37:28.518662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.959 [2024-11-04 16:37:28.518673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.959 qpair failed and we were unable to recover it. 00:26:01.959 [2024-11-04 16:37:28.518737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.959 [2024-11-04 16:37:28.518748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.959 qpair failed and we were unable to recover it. 00:26:01.959 [2024-11-04 16:37:28.518847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.959 [2024-11-04 16:37:28.518885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:01.959 qpair failed and we were unable to recover it. 00:26:01.959 [2024-11-04 16:37:28.519005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.959 [2024-11-04 16:37:28.519045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.519220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.519352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.519518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.519681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.519777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.519875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.519893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.520036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.520572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.520958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.520969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.521116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.521721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.521881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.521893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.522243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.522814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.522898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.522909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.523054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.523066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.523127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.523138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.523269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.523280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 
00:26:01.960 [2024-11-04 16:37:28.523416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.523428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.960 [2024-11-04 16:37:28.523557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.960 [2024-11-04 16:37:28.523568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.960 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.523697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.523709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.523774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.523785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.523863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.523875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.523956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.523967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.524473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.524842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.524941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.524954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.525527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.525936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.525947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.526010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.526529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.526898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.526910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.527088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.527100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.527245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.527449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.527460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.527550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.527561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 00:26:01.961 [2024-11-04 16:37:28.527656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.961 [2024-11-04 16:37:28.527668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.961 qpair failed and we were unable to recover it. 
00:26:01.961 [2024-11-04 16:37:28.527820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.527831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.527906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.527917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.528460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.528947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.528958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.529267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.529874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.529976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.529987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.530179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.530286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.530427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.530595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.530702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.530856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.530868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.531209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.531838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.531981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.531992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 
00:26:01.962 [2024-11-04 16:37:28.532461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.962 [2024-11-04 16:37:28.532738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.962 qpair failed and we were unable to recover it. 00:26:01.962 [2024-11-04 16:37:28.532868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.532880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.532960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.532972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.533112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.533807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.533899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.533995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.534284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.534744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.534898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.534909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.535574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.535856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.535994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.536074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.536163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.536329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.536400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 00:26:01.963 [2024-11-04 16:37:28.536487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.963 qpair failed and we were unable to recover it. 
00:26:01.963 [2024-11-04 16:37:28.536721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.963 [2024-11-04 16:37:28.536733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.536818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.536949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.536960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.537217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.537889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.537965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.537976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.538420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.538905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.538917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.539061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.539073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.539208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.539220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.539419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.539431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.539643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.539656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.539885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.539897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.540093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.540674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.540843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.540988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.541000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.541093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.541105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.541235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.541247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 
00:26:01.964 [2024-11-04 16:37:28.541433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.541445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.541540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.964 [2024-11-04 16:37:28.541552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.964 qpair failed and we were unable to recover it. 00:26:01.964 [2024-11-04 16:37:28.541700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.541712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.541778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.541790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.541877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.541890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.542029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.542644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.542963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.542975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.543418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.543868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.543943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.543953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.544559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.544984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.544996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.545070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.545168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.545342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.545440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.545588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.545785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.545928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.545940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.546014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.546027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.546159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.546171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.965 [2024-11-04 16:37:28.546309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.546321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 
00:26:01.965 [2024-11-04 16:37:28.546401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.965 [2024-11-04 16:37:28.546414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.965 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.546481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.546493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.546642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.546655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.546755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.546767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.546861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.546874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.546962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.546975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.547628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.547971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.547982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.548217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.548866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.548878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.549073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.549642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.549979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.549991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.550127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.550282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.550373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.550522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.550612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.550764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 
00:26:01.966 [2024-11-04 16:37:28.550928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.550941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.551033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.551046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.551112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.551124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.551185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.966 [2024-11-04 16:37:28.551197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.966 qpair failed and we were unable to recover it. 00:26:01.966 [2024-11-04 16:37:28.551280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.551434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.551589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.551749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.551835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.551980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.551993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.552139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.552234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.552467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.552570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.552658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.552817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.552830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.553444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.553954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.553967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.554133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.554146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.554297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.554310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.554381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.554392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.554462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.554474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 00:26:01.967 [2024-11-04 16:37:28.554610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.967 [2024-11-04 16:37:28.554623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.967 qpair failed and we were unable to recover it. 
00:26:01.967 [2024-11-04 16:37:28.554723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.554736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.554950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.554963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.555948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.555959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.556103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.556115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.556196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.967 [2024-11-04 16:37:28.556207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.967 qpair failed and we were unable to recover it.
00:26:01.967 [2024-11-04 16:37:28.556369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.556473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.556581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.556692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.556845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.556926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.556937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.557954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.557964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.558932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.558944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.559879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.559890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.560030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.560040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.560216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.560228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.560378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.968 [2024-11-04 16:37:28.560389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.968 qpair failed and we were unable to recover it.
00:26:01.968 [2024-11-04 16:37:28.560519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.560529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.560730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.560742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.560818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.560828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.560959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.560971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.561851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.561862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.562977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.562988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.563907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.563919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.564923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.564995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.565006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.565243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.565253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.565386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.969 [2024-11-04 16:37:28.565397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.969 qpair failed and we were unable to recover it.
00:26:01.969 [2024-11-04 16:37:28.565555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.565567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.565702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.565713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.565883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.565894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.565980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.565990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.566940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.970 [2024-11-04 16:37:28.566951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:01.970 qpair failed and we were unable to recover it.
00:26:01.970 [2024-11-04 16:37:28.567153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.567234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.567310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.567402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.567504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe11af0 is same with the state(6) to be set 00:26:01.970 [2024-11-04 16:37:28.567689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 
00:26:01.970 [2024-11-04 16:37:28.567871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.567887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 
00:26:01.970 [2024-11-04 16:37:28.568698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.568894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.568905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 
00:26:01.970 [2024-11-04 16:37:28.569214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.569776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 
00:26:01.970 [2024-11-04 16:37:28.569941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.569951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.570021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.570175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.570186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.570258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.570270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 00:26:01.970 [2024-11-04 16:37:28.570360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.970 [2024-11-04 16:37:28.570371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.970 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.570522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.570534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.570616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.570628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.570826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.570837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.570899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.570910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.571124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.571721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.571888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.571898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.572368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.572848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.572994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.573164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.573336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.573490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.573591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.573754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 
00:26:01.971 [2024-11-04 16:37:28.573832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.573844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.573996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.971 [2024-11-04 16:37:28.574007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:01.971 qpair failed and we were unable to recover it. 00:26:01.971 [2024-11-04 16:37:28.574093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.920128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.920456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.920497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.920755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.920766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 
00:26:02.241 [2024-11-04 16:37:28.920854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.920867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.921066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.921097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.921247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.921279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.921407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.921440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.921558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.921589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 
00:26:02.241 [2024-11-04 16:37:28.921898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.921909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.922093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.922105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.922255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.241 [2024-11-04 16:37:28.922288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.241 qpair failed and we were unable to recover it. 00:26:02.241 [2024-11-04 16:37:28.922490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.922524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.922785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.922821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 
00:26:02.242 [2024-11-04 16:37:28.923016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.923048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.923176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.923210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.923372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.923385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.923533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.923546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.923767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.923780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 
00:26:02.242 [2024-11-04 16:37:28.924032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.924133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.924320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.924476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.924651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 
00:26:02.242 [2024-11-04 16:37:28.924877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.924910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.925103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.925135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.925267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.925300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.925421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.925454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.925734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.925769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 
00:26:02.242 [2024-11-04 16:37:28.925907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.925941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.926067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.926100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.926305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.926339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.926514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.926526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 00:26:02.242 [2024-11-04 16:37:28.926788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.242 [2024-11-04 16:37:28.926824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.242 qpair failed and we were unable to recover it. 
00:26:02.242 [2024-11-04 16:37:28.927011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.242 [2024-11-04 16:37:28.927043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.242 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery failure repeat for tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420, from 16:37:28.927227 through 16:37:28.945646 ...]
00:26:02.245 [2024-11-04 16:37:28.945772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.245 [2024-11-04 16:37:28.945846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.245 qpair failed and we were unable to recover it.
[... identical failures repeat for tqpair=0x7f64ac000b90 through 16:37:28.948222, then again for tqpair=0x7f64a4000b90 through 16:37:28.948846 ...]
00:26:02.245 [2024-11-04 16:37:28.948970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.949004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.949263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.949296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.949435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.949469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.949705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.949719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.949895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.949929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 
00:26:02.245 [2024-11-04 16:37:28.950065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.950097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.950297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.950330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.950551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.950562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.950712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.950723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.950949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.950962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 
00:26:02.245 [2024-11-04 16:37:28.951046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.951057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.951199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.951211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.951461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.951494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.245 [2024-11-04 16:37:28.951687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.245 [2024-11-04 16:37:28.951721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.245 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.951861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.951891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.951959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.951970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.952733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.952918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.952992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.953468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.953965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.953976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.954055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.954151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.954257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.954710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.954911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.954924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.955019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.955030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.955219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.955254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.955453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.955486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.955676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.955712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.955910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.955942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.956118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.956152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.956340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.956374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.956609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.956621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.956720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.956754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 
00:26:02.246 [2024-11-04 16:37:28.956874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.246 [2024-11-04 16:37:28.956908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.246 qpair failed and we were unable to recover it. 00:26:02.246 [2024-11-04 16:37:28.957054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.957087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.957353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.957387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.957653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.957689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.957813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.957848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.958027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.958247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.958465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.958617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.958849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.958946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.958958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.959052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.959161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.959317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.959505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.959689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.959916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.959949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.960088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.960121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.960243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.960276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.960496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.960685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.960720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.960900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.960933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.961119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.961152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.961354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.961387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.961567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.961617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.961761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.961776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.961857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.961868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.962008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.962020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.962172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.962204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 00:26:02.247 [2024-11-04 16:37:28.962385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.247 [2024-11-04 16:37:28.962419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.247 qpair failed and we were unable to recover it. 
00:26:02.247 [2024-11-04 16:37:28.962641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.962679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.962829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.962842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.962972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.962985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.247 [2024-11-04 16:37:28.963909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.247 [2024-11-04 16:37:28.963920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.247 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.964035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.964266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.964425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.964656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.964833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.964994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.965027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.965213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.965247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.965494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.965527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.965720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.965755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.965879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.965892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.966044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.966056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.966261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.966274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.966362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.966374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.966616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.966650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.966843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.966877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.967898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.967909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.968144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.968235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.968429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.968590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.968775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.968967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.969001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.969247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.969282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.969471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.969505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.969633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.969647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.969795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.969808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.969967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.248 [2024-11-04 16:37:28.970801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.248 qpair failed and we were unable to recover it.
00:26:02.248 [2024-11-04 16:37:28.970941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.970953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.971849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.971996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.972940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.972951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.973208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.973240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.973430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.973464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.973587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.973652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.973758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.973770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.973909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.973921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.974906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.974999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.975173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.975394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.975550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.975756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.975856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.975867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.249 [2024-11-04 16:37:28.976669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.249 qpair failed and we were unable to recover it.
00:26:02.249 [2024-11-04 16:37:28.976755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.976767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.976851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.976863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.976946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.976959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.977098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.977109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.977262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.977275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.977421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.977454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.977580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.977622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.977872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.978064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.978096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.978365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.978398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.978515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.978526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.978765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.978777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.978925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.978937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.979975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.979994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.980156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.980189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.980309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.980342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.980504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.980522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.980697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.980735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.980939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.980970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.981150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.981182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.981403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.981436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.981573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.981616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.981826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.981860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.982069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.982104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.982239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.250 [2024-11-04 16:37:28.982272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.250 qpair failed and we were unable to recover it.
00:26:02.250 [2024-11-04 16:37:28.982457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.250 [2024-11-04 16:37:28.982499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.250 qpair failed and we were unable to recover it. 00:26:02.250 [2024-11-04 16:37:28.982628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.250 [2024-11-04 16:37:28.982647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.250 qpair failed and we were unable to recover it. 00:26:02.250 [2024-11-04 16:37:28.982805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.250 [2024-11-04 16:37:28.982819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.250 qpair failed and we were unable to recover it. 00:26:02.250 [2024-11-04 16:37:28.982965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.250 [2024-11-04 16:37:28.982998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.250 qpair failed and we were unable to recover it. 00:26:02.250 [2024-11-04 16:37:28.983173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.250 [2024-11-04 16:37:28.983206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.983380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.983414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.983553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.983588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.983843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.983878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.984073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.984107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.984249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.984542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.984554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.984697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.984732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.984874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.984907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.985039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.985072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.985279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.985313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.985431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.985464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.985662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.985697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.985925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.985938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.986083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.986095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.986260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.986486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.986519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.986707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.986742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.986884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.986916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.987048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.987081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.987200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.987234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.987358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.987391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.987513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.987548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.987817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.987857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.988317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.988778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.988945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.988986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.989119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.989153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.989289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.989322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 
00:26:02.251 [2024-11-04 16:37:28.989447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.989491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.989675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.989687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.251 [2024-11-04 16:37:28.989827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.251 [2024-11-04 16:37:28.989839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.251 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.989975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.989987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.990122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.990155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.990371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.990404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.990523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.990557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.990803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.990816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.990966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.990977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.991042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.991197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.991304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.991719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.991888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.991980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.991991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.992103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.992137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.992330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.992364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.992483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.992517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.992714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.992736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.992829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.992849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.993003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.993020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.993168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.993186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.993429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.993464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.993578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.993622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.993813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.993847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.994752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.994941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.994953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.995028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.995039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 00:26:02.252 [2024-11-04 16:37:28.995115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.252 [2024-11-04 16:37:28.995126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.252 qpair failed and we were unable to recover it. 
00:26:02.252 [2024-11-04 16:37:28.995261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.252 [2024-11-04 16:37:28.995272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.252 qpair failed and we were unable to recover it.
00:26:02.255 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with new timestamps from 2024-11-04 16:37:28.995261 through 16:37:29.016259 ...]
00:26:02.255 [2024-11-04 16:37:29.016531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.255 [2024-11-04 16:37:29.016564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.255 qpair failed and we were unable to recover it. 00:26:02.255 [2024-11-04 16:37:29.016728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.016741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.016808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.016820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.016972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.017005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.017236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.017271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.017478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.017510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.017651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.017685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.017879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.017911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.018049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.018084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.018269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.018282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.018421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.018433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.018568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.018621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.018811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.018845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.018984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.019018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.019209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.019241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.019437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.019470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.019599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.019615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.019810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.019823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.019980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.020013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.020206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.020238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.020500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.020533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.020681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.020693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.020918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.021254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.021895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.021928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.022131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.022163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.022310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.022345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.022527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.022561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.022696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.022732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.022848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.022882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.256 [2024-11-04 16:37:29.023019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.023052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.023203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.023215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.023436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.023474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.023588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.023631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 00:26:02.256 [2024-11-04 16:37:29.023835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.256 [2024-11-04 16:37:29.023868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.256 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.024056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.024796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.024903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.024995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.025169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.025349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.025519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.025697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.025915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.025949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.026066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.026099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.026283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.026316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.026497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.026530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.026731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.026765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.026959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.026993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.027120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.027154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.027267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.027301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.027549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.027581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.027720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.027754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.027964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.027975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.028338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.028943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.028954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.029114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.029126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.029268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.029280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.029521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.029553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.029717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.029729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.029858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.029869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 
00:26:02.257 [2024-11-04 16:37:29.030026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.030058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.030333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.030367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.030556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.257 [2024-11-04 16:37:29.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.257 qpair failed and we were unable to recover it. 00:26:02.257 [2024-11-04 16:37:29.030702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.030716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 00:26:02.258 [2024-11-04 16:37:29.030854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.030866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 
00:26:02.258 [2024-11-04 16:37:29.030942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.030954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 00:26:02.258 [2024-11-04 16:37:29.031032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.031044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 00:26:02.258 [2024-11-04 16:37:29.031175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.031187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 00:26:02.258 [2024-11-04 16:37:29.031266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.031277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 00:26:02.258 [2024-11-04 16:37:29.031365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.258 [2024-11-04 16:37:29.031377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.258 qpair failed and we were unable to recover it. 
00:26:02.258 [2024-11-04 16:37:29.031444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.031456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.031584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.031595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.031684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.031695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.031835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.031847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.031911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.031923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.032850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.032971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.033128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.033284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.033510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.033731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.033886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.033920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.034100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.034132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.034393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.034427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.034654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.034688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.034862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.034874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.035934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.035967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.036149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.036163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.036255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.258 [2024-11-04 16:37:29.036266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.258 qpair failed and we were unable to recover it.
00:26:02.258 [2024-11-04 16:37:29.036480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.036513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.036779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.036815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.036938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.036977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.037182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.037214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.037340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.037372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.037551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.037584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.037788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.037883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.037894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.038863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.038986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.039197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.039366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.039581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.039679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.039793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.040014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.040048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.040232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.040264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.040527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.040561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.040760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.040795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.041882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.041956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.042178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.042215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.042420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.042454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.042636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.042673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.042920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.259 [2024-11-04 16:37:29.042953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.259 qpair failed and we were unable to recover it.
00:26:02.259 [2024-11-04 16:37:29.043147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.043181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.043315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.043348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.043539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.043573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.043711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.043745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.043919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.043937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.044935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.044948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.045790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.045823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.046040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.046073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.046194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.046227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.046468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.046503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.046633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.046668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.046805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.046839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.047120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.047276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.047419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.047663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.047753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.047981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.048014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.048192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.048225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.048489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.260 [2024-11-04 16:37:29.048521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.260 qpair failed and we were unable to recover it.
00:26:02.260 [2024-11-04 16:37:29.048643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.048656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.048797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.048809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.048954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.048986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.049165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.049198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.049353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.049426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 
00:26:02.260 [2024-11-04 16:37:29.049674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.049715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.049905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.049939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.260 qpair failed and we were unable to recover it. 00:26:02.260 [2024-11-04 16:37:29.050121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.260 [2024-11-04 16:37:29.050138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.050310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.050327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.050481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.050499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.050711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.050725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.050857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.050869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.051448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.051956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.051967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.052103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.052753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.052979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.052990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.053256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.053836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.053868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.053988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.054021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.054183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.054194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.054345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.054379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.054682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.054715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.054898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.054910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 
00:26:02.261 [2024-11-04 16:37:29.055055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.055067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.055149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.055160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.261 [2024-11-04 16:37:29.055259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.261 [2024-11-04 16:37:29.055269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.261 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.055401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.055560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.055645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.055736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.055823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.055965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.055976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.056140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.056892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.056980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.056991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.057530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.057877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.057887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.058014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.058153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.058432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.058712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.058882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.058967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.058978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.059116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.059128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.059283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.059316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.059563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.059595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.059858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.059870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.060011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 
00:26:02.544 [2024-11-04 16:37:29.060101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.060192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.060341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.060622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.544 qpair failed and we were unable to recover it. 00:26:02.544 [2024-11-04 16:37:29.060853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.544 [2024-11-04 16:37:29.060886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 
00:26:02.545 [2024-11-04 16:37:29.061015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.061101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.061336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.061474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.061620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 
00:26:02.545 [2024-11-04 16:37:29.061789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.061821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.062006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.062047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.062292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.062324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.062504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.062536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 00:26:02.545 [2024-11-04 16:37:29.062661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.545 [2024-11-04 16:37:29.062695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.545 qpair failed and we were unable to recover it. 
00:26:02.545 [... the preceding three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for the remainder of this interval ...]
00:26:02.547 [2024-11-04 16:37:29.082048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.082080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.082257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.082290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.082516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.082549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.082751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.082763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.082845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.082859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 
00:26:02.547 [2024-11-04 16:37:29.082999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.083032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.083169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.083201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.083364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.083566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.083597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.083826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.083860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 
00:26:02.547 [2024-11-04 16:37:29.084080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.084112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.084304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.084338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.084581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.084625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.084750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.084763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.084900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.084912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 
00:26:02.547 [2024-11-04 16:37:29.085043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.085055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.085813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.085831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.085929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.085940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.086025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.086036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 00:26:02.547 [2024-11-04 16:37:29.086128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.547 [2024-11-04 16:37:29.086162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.547 qpair failed and we were unable to recover it. 
00:26:02.547 [2024-11-04 16:37:29.086450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.086484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.086731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.086767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.087040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.087072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.087261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.087295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.087452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.087613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.087648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.087768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.087801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.088070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.088249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.088457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.088608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.088699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.088926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.088938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.089068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.089105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.089248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.089282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.089481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.089514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.089697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.089710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.089845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.089857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.090017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.090049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.090229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.090261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.090379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.090412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.090595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.090644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.090909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.090920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.091052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.091085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.091224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.091262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.091410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.091443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.091653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.091687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.091815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.091848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.092362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.092849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.092964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.092997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.093121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.093279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.093452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.093612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.093770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.093947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.093979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.094553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.094878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.094889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.095022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.095230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.095322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.095467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.095563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.095706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.095867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.095879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.096663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.096890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.096923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.097036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.097069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 00:26:02.548 [2024-11-04 16:37:29.097186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.548 [2024-11-04 16:37:29.097226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.548 qpair failed and we were unable to recover it. 
00:26:02.548 [2024-11-04 16:37:29.097343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.097376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.097558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.097591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.097775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.097808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.098056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.098089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.098260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.098272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.098429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.098461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.098643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.098678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.548 qpair failed and we were unable to recover it.
00:26:02.548 [2024-11-04 16:37:29.098866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.548 [2024-11-04 16:37:29.098899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.099021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.099054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.099242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.099275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.099504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.099538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.099672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.099706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.099899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.099932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.100049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.100093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.100324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.100335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.100471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.100504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.100625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.100659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.100778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.100811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.101016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.101049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.101181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.101214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.101322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.101355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.101540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.101573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.101804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.101876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.102141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.102177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.102379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.102413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.102590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.102640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.102959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.103031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.103205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.103245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.103416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.103451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.103573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.103634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.103757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.103791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.103972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.104188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.104334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.104542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.104959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.104992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.105890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.105902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.106141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.106333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.106365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.106629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.106663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.106776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.106809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.106930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.106962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.107167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.107199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.107388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.107421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.107703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.107737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.107920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.107931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.108086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.108119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.108228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.108261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.108441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.108475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.108670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.108704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.108950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.549 [2024-11-04 16:37:29.108983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.549 qpair failed and we were unable to recover it.
00:26:02.549 [2024-11-04 16:37:29.109182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.109194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.109282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.109315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.109519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.109552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.109738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.109772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.109953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.109965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.110197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.110230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.110419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.110452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.110667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.110702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.110874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.110946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.111152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.111188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.111310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.111344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.111458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.111490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.111734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.111770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.111969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.112001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.112184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.112216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.112419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.112452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.112673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.112707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.112886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.112919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.113038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.113071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.113195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.113228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.113360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.113392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.113635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.113679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.113855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.113887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.114002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.114034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.114290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.114323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.114435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.114468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.114648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.114680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.114883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.114915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.115052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.115071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.115292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.115322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.115502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.115532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.115856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.115965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.115996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.116188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.116219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.116393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.116425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.116543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.116576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.116788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.550 [2024-11-04 16:37:29.116806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:02.550 qpair failed and we were unable to recover it.
00:26:02.550 [2024-11-04 16:37:29.117064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.117097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.117367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.117399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.117614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.117649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.117893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.117926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.118042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.118075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.118262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.118294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.118481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.118513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.118706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.118741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.118873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.118905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.119050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.119083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.119346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.119363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.119517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.119534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.119772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.119790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.119937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.119953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.120145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.120177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.120367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.120399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.120591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.120634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.120751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.120783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.120896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.120928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.121190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.121223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.121432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.121465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.121594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.121633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.121850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.121942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.121957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.122114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.122212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.122385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.122578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.122701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.122811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 
00:26:02.551 [2024-11-04 16:37:29.122925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.122942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.551 [2024-11-04 16:37:29.123192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.551 [2024-11-04 16:37:29.123221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.551 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.123449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.123462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.123616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.123628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.123729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.123799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.123826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.123997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.124572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.124882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.124892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.125097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.125129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.125311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.125344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.125485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.125518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.125703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.125736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.125923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.125956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.126130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.126162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.126279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.126312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.126524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.126565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.126713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.126725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.126889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.126921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.127047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.127079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.127350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.127383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.127557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.127590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.127793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.127827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.128083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.128117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.128301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.128333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.128470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.128503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.128685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.128721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.128927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.128960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 
00:26:02.552 [2024-11-04 16:37:29.129373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.552 [2024-11-04 16:37:29.129780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.552 qpair failed and we were unable to recover it. 00:26:02.552 [2024-11-04 16:37:29.129943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.129984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 
00:26:02.553 [2024-11-04 16:37:29.130248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.130282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.130416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.130449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.130652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.130686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.130894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.130927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.131122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.131154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 
00:26:02.553 [2024-11-04 16:37:29.131333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.131366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.131645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.131679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.131857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.131890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.132089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.132255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 
00:26:02.553 [2024-11-04 16:37:29.132353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.132522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.132695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.132858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.132891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.133020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.133052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 
00:26:02.553 [2024-11-04 16:37:29.133242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.133274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.133412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.133444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.133640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.133674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.133941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.133974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 00:26:02.553 [2024-11-04 16:37:29.134219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.553 [2024-11-04 16:37:29.134251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.553 qpair failed and we were unable to recover it. 
00:26:02.553–00:26:02.556 [the same three-line record — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats ~110 more times between 16:37:29.134463 and 16:37:29.154949]
00:26:02.556 [2024-11-04 16:37:29.155142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 
00:26:02.556 [2024-11-04 16:37:29.155761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.556 qpair failed and we were unable to recover it. 00:26:02.556 [2024-11-04 16:37:29.155944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.556 [2024-11-04 16:37:29.155954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.156017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.156028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.156142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.156173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.156294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.156327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.156500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.156544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.156736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.156771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.156981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.157014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.157143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.157176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.157312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.157343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.157622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.157656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.157835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.157868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.158055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.158088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.158335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.158368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.158578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.158621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.158909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.158942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.159136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.159148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.159255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.159288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.159432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.159465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.159661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.159696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.159939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.159951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.160090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.160122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.160306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.160339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.160630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.160665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.160801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.160834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.161024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.161057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.161296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.161308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.161496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.161508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.161647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.161659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.161740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.161750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.162006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.162039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.162170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.162202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.162317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.162351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.162617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.162651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 
00:26:02.557 [2024-11-04 16:37:29.162827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.162860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.163040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.163052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.163134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.163144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.557 [2024-11-04 16:37:29.163221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.557 [2024-11-04 16:37:29.163232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.557 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.163295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.163305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.163450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.163484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.163627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.163661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.163849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.163884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.164351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.164809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.164965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.164976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.165152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.165184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.165408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.165441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.165622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.165656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.165874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.165906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.166124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.166135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.166357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.166391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.166673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.166707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.166920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.166954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.167100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.167133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.167257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.167292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.167537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.167569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.167836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.167909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.168076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.168113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.168297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.168314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.168467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.168482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.168642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.168655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.168862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.168894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.169027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.169060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.169272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.169313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.169512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.169524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.169732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.169744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.169908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.169920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 00:26:02.558 [2024-11-04 16:37:29.170011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.558 [2024-11-04 16:37:29.170022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.558 qpair failed and we were unable to recover it. 
00:26:02.558 [2024-11-04 16:37:29.170099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.170110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.170321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.170335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.170474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.170507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.170701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.170736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.170880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.170912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.171205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.171405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.171559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.171659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.171769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.171937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.171969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.172115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.172154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.172338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.172371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.172553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.172586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.172742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.172776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.173034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.173047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.173209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.173243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.173425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.173459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.173673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.173709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.173885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.173919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.174051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.174085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.174213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.174247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.174431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.174443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.174594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.174635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.174818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.174852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.174977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.175140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.175290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.175386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.175530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 
00:26:02.559 [2024-11-04 16:37:29.175776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.175810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.175987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.176019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.176211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.176245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.559 [2024-11-04 16:37:29.176377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.559 [2024-11-04 16:37:29.176411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.559 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.176611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.176645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.176771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.176806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.177020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.177052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.177183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.177215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.177419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.177453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.177647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.177681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.177930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.177965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.178068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.178079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.178147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.178158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.178308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.178343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.178533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.178565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.178758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.178831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.179032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.179237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.179401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.179569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.179796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.179976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.179992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.180127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.180139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.180222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.180233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.180326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.180358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.180614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.180648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.180838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.180870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.181048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.181215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.181305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.181409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.181631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.181848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.181882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.182018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.182051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.560 [2024-11-04 16:37:29.182172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.182205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 
00:26:02.560 [2024-11-04 16:37:29.182460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.560 [2024-11-04 16:37:29.182494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.560 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.182675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.182710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.182892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.182925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.183110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.183142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.183360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.183373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.183532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.183566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.183695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.183728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.183970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.184133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.184224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.184377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.184532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.184702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.184738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.184983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.185020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.185163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.185197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.185361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.185378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.185556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.185588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.185842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.185875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.186063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.186097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.186352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.186387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.186561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.186595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.186794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.186828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.187022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.187055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.187318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.187337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.187589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.187610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.187770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.187802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.187948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.187992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.188183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.188216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.188412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.188445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.188624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.188659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.188876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.188910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.189021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.189039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.189305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.189322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.189414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.189430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 00:26:02.561 [2024-11-04 16:37:29.189569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.189586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.561 qpair failed and we were unable to recover it. 
00:26:02.561 [2024-11-04 16:37:29.189701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.561 [2024-11-04 16:37:29.189718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.189881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.189900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.190006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.190115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.190280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.190412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.190591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.190850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.190884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.191156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.191191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.191329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.191362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.191559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.191591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.191866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.192087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.192178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.192281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.192498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.192668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.192885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.192919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.193104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.193254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.193407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.193552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.193711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.193942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.193975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.194245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.194278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.194474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.194486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.194578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.194589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.194747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.194759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.194925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.194959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.195094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.195128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 
00:26:02.562 [2024-11-04 16:37:29.195239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.195272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.195452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.195467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.195529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.195540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.562 [2024-11-04 16:37:29.195675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.562 [2024-11-04 16:37:29.195710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.562 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.195895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.195928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.196117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.196150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.196348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.196360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.196499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.196511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.196659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.196671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.196828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.196862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.197068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.197101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.197248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.197282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.197526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.197558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.197677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.197712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.197888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.197920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.198108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.198141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.198296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.198310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.198454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.198489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.198613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.198648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.198836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.198870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.199007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.199039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.199216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.199249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.199533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.199627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.199638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.199850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.199883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.200027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.200062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.200274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.200307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.200491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.200523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.200717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.200791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.200938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.200977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.201159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.201194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.201309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.201342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.201525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.201559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.201821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.201858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.202104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.202136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.202421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.202439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.202586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.202609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.202771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.202806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.203007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.203040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 00:26:02.563 [2024-11-04 16:37:29.203162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.203197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.563 qpair failed and we were unable to recover it. 
00:26:02.563 [2024-11-04 16:37:29.203385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.563 [2024-11-04 16:37:29.203419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.203614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.203649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.203875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.203909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.204042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.204077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.204244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 
00:26:02.564 [2024-11-04 16:37:29.204429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.204463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.204681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.204724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.204866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.204898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 
00:26:02.564 [2024-11-04 16:37:29.205393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.205908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.205940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 
00:26:02.564 [2024-11-04 16:37:29.206061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.206100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.206280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.206313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.206389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.206399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.206533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.206545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 00:26:02.564 [2024-11-04 16:37:29.206731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.564 [2024-11-04 16:37:29.206743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.564 qpair failed and we were unable to recover it. 
00:26:02.567 [2024-11-04 16:37:29.228033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.228045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.228251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.228285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.228414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.228448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.228642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.228678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.228899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.228931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 
00:26:02.567 [2024-11-04 16:37:29.229053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.229092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.229224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.229257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.229505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.229539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.229747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.229780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 00:26:02.567 [2024-11-04 16:37:29.229959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.567 [2024-11-04 16:37:29.229993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.567 qpair failed and we were unable to recover it. 
00:26:02.567 [2024-11-04 16:37:29.230189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.230222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.230419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.230453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.230583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.230624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.230889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.230924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.231117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.231345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.231379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.231498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.231532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.231775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.231810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.232106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.232380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.232414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.232663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.232676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.232823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.232835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.232970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.232981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.233074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.233086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.233262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.233415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.233447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.233629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.233663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.233843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.233876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.234017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.234184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.234352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.234510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.234760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.234920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.234955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.235149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.235161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.235305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.235318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.235453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.235486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.235623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.235656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.235764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.235798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.236004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.236038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.236172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.236204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.236453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.236487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.236669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.236703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.236833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.236865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.237046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.237079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 
00:26:02.568 [2024-11-04 16:37:29.237208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.237245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.237376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.237410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.237613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.237648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.568 qpair failed and we were unable to recover it. 00:26:02.568 [2024-11-04 16:37:29.237847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.568 [2024-11-04 16:37:29.237879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.237991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.238022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.238206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.238240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.238489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.238522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.238771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.238805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.239000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.239235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.239400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.239621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.239726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.239829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.239841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.240102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.240320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.240468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.240561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.240632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.240773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.240927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.240959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.241090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.241125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.241315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.241347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.241539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.241573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.241794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.241827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.242071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.242103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.242232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.242267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.242415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.242450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.242608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.242620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 00:26:02.569 [2024-11-04 16:37:29.242850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.242883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
00:26:02.569 [2024-11-04 16:37:29.242997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.569 [2024-11-04 16:37:29.243030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.569 qpair failed and we were unable to recover it. 
[... identical connect()/qpair error triplets repeat continuously from 16:37:29.243 through 16:37:29.265 (errno = 111 on every attempt) for tqpairs 0x7f64a4000b90, 0x7f64a0000b90, 0x7f64ac000b90, and 0xe03ba0, all targeting addr=10.0.0.2, port=4420; each repetition ends with "qpair failed and we were unable to recover it." ...]
00:26:02.573 [2024-11-04 16:37:29.265782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.265815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.266008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.266041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.266171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.266215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.266487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.266509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.266623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.266642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.266865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.266901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.267015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.267047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.267228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.267263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.267456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.267468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.267621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.267654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.267853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.267886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.268633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.268947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.268980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.269111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.269238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.269380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.269550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.269661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.269822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.269855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.270032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.270066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.270257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.270289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.270492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.270526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.270734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.270770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.270969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.271002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.271188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.271223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 
00:26:02.573 [2024-11-04 16:37:29.271420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.271453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.271729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.271763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.271888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.271921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.272117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.573 [2024-11-04 16:37:29.272152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.573 qpair failed and we were unable to recover it. 00:26:02.573 [2024-11-04 16:37:29.272346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.272381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.272580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.272621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.272729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.272764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.272948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.272980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.273193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.273226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.273354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.273387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.273516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.273551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.273678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.273713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.273844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.273886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.274017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.274178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.274312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.274458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.274685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.274891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.274926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.275105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.275140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.275266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.275299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.275514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.275548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.275681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.275701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.275860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.275877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.276119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.276165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.276362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.276396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.276525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.276560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.276779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.276813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.277009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.277043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.277178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.277212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.277371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.277384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.277532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.277566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.277771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.277806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.278025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.278059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.278183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.278215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.278364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.278399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.278594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.278641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.278884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.278917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.279049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.279084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.279209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.279255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 
00:26:02.574 [2024-11-04 16:37:29.279401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.279420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.279568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.279611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.574 qpair failed and we were unable to recover it. 00:26:02.574 [2024-11-04 16:37:29.279808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.574 [2024-11-04 16:37:29.279842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.279976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.280211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 
00:26:02.575 [2024-11-04 16:37:29.280355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.280512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.280649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.280867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.280904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 00:26:02.575 [2024-11-04 16:37:29.281103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.575 [2024-11-04 16:37:29.281146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:02.575 qpair failed and we were unable to recover it. 
00:26:02.575 [... the same three-record failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 16:37:29.281294 through 16:37:29.302631 for tqpair=0x7f64ac000b90 and tqpair=0x7f64a4000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:02.578 [2024-11-04 16:37:29.302794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.302807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.302957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.302970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.303055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.303139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.303300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.303513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.303766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.303927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.303962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.304144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.304407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.304578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.304685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.304774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.304933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.304967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.305149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.305182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.305384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.305418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.305616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.305651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.305828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.305862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.305990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.306024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.306160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.306195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.306404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.306438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.306682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.306718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.306853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.306888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.307066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.307279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.307466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.307557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.307711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.307871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.307904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 00:26:02.578 [2024-11-04 16:37:29.308023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.578 [2024-11-04 16:37:29.308056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.578 qpair failed and we were unable to recover it. 
00:26:02.578 [2024-11-04 16:37:29.308246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.308410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.308618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.308699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.308838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.308912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.308923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.309520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.309960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.309995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.310109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.310141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.310326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.310360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.310545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.310578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.310716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.310749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.310936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.310970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.311236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.311270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.311449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.311482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.311659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.311693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.311889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.311922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.312062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.312096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.312305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.312340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.312527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.312559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.312681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.312715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.312858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.312890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.313018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.313238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.313384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 
00:26:02.579 [2024-11-04 16:37:29.313613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.313771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.313858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.313869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.579 [2024-11-04 16:37:29.314056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.579 [2024-11-04 16:37:29.314087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.579 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.314256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 
00:26:02.580 [2024-11-04 16:37:29.314367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.314478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.314566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.314800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.314962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.314996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 
00:26:02.580 [2024-11-04 16:37:29.315194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.315227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.315361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.315395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.315510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.315522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.315606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.315618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 00:26:02.580 [2024-11-04 16:37:29.315751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.580 [2024-11-04 16:37:29.315763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.580 qpair failed and we were unable to recover it. 
00:26:02.580 [2024-11-04 16:37:29.315919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.580 [2024-11-04 16:37:29.315932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.580 qpair failed and we were unable to recover it.
00:26:02.582 [2024-11-04 16:37:29.327649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.582 [2024-11-04 16:37:29.327702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.582 qpair failed and we were unable to recover it.
00:26:02.583 [2024-11-04 16:37:29.335998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.336030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.336222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.336254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.336390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.336424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.336656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.336924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.336958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.337206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.337239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.337459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.337492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.337703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.337737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.337865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.337898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.338016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.338048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.338193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.338227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.338442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.338454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.338558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.338591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.338871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.338905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.339086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.339119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.339264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.339296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.339429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.339462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.339722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.339758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.339886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.339920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.340032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.340243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.340397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.340648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.340797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.340956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.340969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.341120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.341132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.341267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.341309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.341434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.341468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.341714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.341748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.341958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.341991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 
00:26:02.583 [2024-11-04 16:37:29.342206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.583 [2024-11-04 16:37:29.342239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.583 qpair failed and we were unable to recover it. 00:26:02.583 [2024-11-04 16:37:29.342417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.342449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.342571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.342612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.342755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.342766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.342839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.342850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 
00:26:02.584 [2024-11-04 16:37:29.342992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.343219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.343294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.343451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.343658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 
00:26:02.584 [2024-11-04 16:37:29.343824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.343858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.344095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.344128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.344388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.344420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.344614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.344627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.584 [2024-11-04 16:37:29.344717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.344729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 
00:26:02.584 [2024-11-04 16:37:29.344883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.584 [2024-11-04 16:37:29.344895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.584 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 
00:26:02.857 [2024-11-04 16:37:29.345675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.345930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.345941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 
00:26:02.857 [2024-11-04 16:37:29.346271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 00:26:02.857 [2024-11-04 16:37:29.346733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.857 [2024-11-04 16:37:29.346744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.857 qpair failed and we were unable to recover it. 
00:26:02.857 [2024-11-04 16:37:29.346881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.346893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.346970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.346981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 
00:26:02.858 [2024-11-04 16:37:29.347349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.347858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.347869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 
00:26:02.858 [2024-11-04 16:37:29.348000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 
00:26:02.858 [2024-11-04 16:37:29.348615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.348949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.348961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 00:26:02.858 [2024-11-04 16:37:29.349035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.858 [2024-11-04 16:37:29.349046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.858 qpair failed and we were unable to recover it. 
00:26:02.861 [2024-11-04 16:37:29.365386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.365461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 
00:26:02.861 [2024-11-04 16:37:29.368813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.368825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.368973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.368986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.369066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.369077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.369209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.369222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.369365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.369401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 
00:26:02.861 [2024-11-04 16:37:29.369526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.369559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.369749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.861 [2024-11-04 16:37:29.369797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.861 qpair failed and we were unable to recover it. 00:26:02.861 [2024-11-04 16:37:29.369885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.369896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 
00:26:02.862 [2024-11-04 16:37:29.370240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.370936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.370969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 
00:26:02.862 [2024-11-04 16:37:29.371110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.371337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.371511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.371681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.371847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 
00:26:02.862 [2024-11-04 16:37:29.371941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 
00:26:02.862 [2024-11-04 16:37:29.372804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.372914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.372989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.373002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.373083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.373094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.373248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.373281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 
00:26:02.862 [2024-11-04 16:37:29.373462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.862 [2024-11-04 16:37:29.373613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.862 [2024-11-04 16:37:29.373647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.862 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.373773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.373800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.373974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.374008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.374199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.374233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.374355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.374387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.374580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.374627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.374820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.374854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.375067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.375100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.375288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.375321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.375515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.375549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.375748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.375761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.375832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.375843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.376058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.376090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.376279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.376313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.376445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.376478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.376684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.376696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.376848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.376882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.377067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.377100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.377349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.377384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.377569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.377581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.377743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.377991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.378025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.378159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.378191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.378328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.378362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.378562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.378595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.378816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.378849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.379611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.379968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.379980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.380072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.380082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 
00:26:02.863 [2024-11-04 16:37:29.380220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.380232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.380432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.863 [2024-11-04 16:37:29.380459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.863 qpair failed and we were unable to recover it. 00:26:02.863 [2024-11-04 16:37:29.380531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.380541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.380687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.380699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.380768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.380779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 
00:26:02.864 [2024-11-04 16:37:29.380950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.380963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.381033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.381075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.381287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.381321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.381513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.381547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 00:26:02.864 [2024-11-04 16:37:29.381684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.864 [2024-11-04 16:37:29.381696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.864 qpair failed and we were unable to recover it. 
00:26:02.864 [2024-11-04 16:37:29.381837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.864 [2024-11-04 16:37:29.381849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.864 qpair failed and we were unable to recover it.
[The three records above repeat for every subsequent reconnection attempt, with timestamps running from 16:37:29.382076 through 16:37:29.403497 (elapsed 00:26:02.864-00:26:02.868); each attempt fails identically with errno = 111 against tqpair=0x7f64a4000b90, addr=10.0.0.2, port=4420, and the qpair is never recovered.]
00:26:02.868 [2024-11-04 16:37:29.403663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.403675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.403826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.403859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.404045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.404079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.404200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.404235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.404419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.404454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 
00:26:02.868 [2024-11-04 16:37:29.404573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.404584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.404799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.404832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.405010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.405044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.405320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.405353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.405544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.405557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 
00:26:02.868 [2024-11-04 16:37:29.405700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.405734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.405932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.405967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.406235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.406269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.406454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.406488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.406597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.406657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 
00:26:02.868 [2024-11-04 16:37:29.406841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.406878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.406954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.406964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.407170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.407204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.407316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.407349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.407539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.407573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 
00:26:02.868 [2024-11-04 16:37:29.407788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.407821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.407953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.868 [2024-11-04 16:37:29.407985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.868 qpair failed and we were unable to recover it. 00:26:02.868 [2024-11-04 16:37:29.408159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.408193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.408437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.408470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.408701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.408736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.409009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.409243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.409534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.409699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.409801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.409951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.409988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.410112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.410145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.410389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.410424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.410620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.410661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.410794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.410807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.411043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.411497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.411945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.411975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.412094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.412127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.412240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.412272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.412462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.412494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.412628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.412662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 00:26:02.869 [2024-11-04 16:37:29.412794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.869 [2024-11-04 16:37:29.412840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.869 qpair failed and we were unable to recover it. 
00:26:02.869 [2024-11-04 16:37:29.412984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.412996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.413088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.413100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.413233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.413267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.413449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.413481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.413731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.413765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 
00:26:02.870 [2024-11-04 16:37:29.413950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.413984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 
00:26:02.870 [2024-11-04 16:37:29.414783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.414969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.414980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.415040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.415145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 
00:26:02.870 [2024-11-04 16:37:29.415427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.415581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.415756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.415956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.415969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.416038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 
00:26:02.870 [2024-11-04 16:37:29.416140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.416290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.416418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.416719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 00:26:02.870 [2024-11-04 16:37:29.416946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.416979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it. 
00:26:02.870 [2024-11-04 16:37:29.417221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.870 [2024-11-04 16:37:29.417255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.870 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair-failed pairs for tqpair=0x7f64a4000b90 repeat from 16:37:29.417378 through 16:37:29.429442 ...]
00:26:02.872 [2024-11-04 16:37:29.429668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.872 [2024-11-04 16:37:29.429741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.872 qpair failed and we were unable to recover it.
[... the same pair repeats, alternating between tqpair=0xe03ba0 and tqpair=0x7f64a4000b90, from 16:37:29.429959 through 16:37:29.437715 ...]
00:26:02.874 [2024-11-04 16:37:29.437798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.437812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it.
00:26:02.874 [2024-11-04 16:37:29.437947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.437959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.438098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.438183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.438291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.438466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 
00:26:02.874 [2024-11-04 16:37:29.438616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.438871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.438883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.439005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.439233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.439245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.439477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.439491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 
00:26:02.874 [2024-11-04 16:37:29.439728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.439765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.440011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.440045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.440232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.440264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.440535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.440576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.874 [2024-11-04 16:37:29.440783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.440803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 
00:26:02.874 [2024-11-04 16:37:29.440995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-11-04 16:37:29.441028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.874 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.441157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.441191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.441375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.441408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.441677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.441713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.441912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.441945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.442138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.442425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.442459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.442579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.442622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.442919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.442954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.443119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.443136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.443305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.443323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.443470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.443489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.443715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.443733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.443883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.443901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.443984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.444199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.444423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.444590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.444779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.444952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.444987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.445175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.445208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.445321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.445355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.445589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.445630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.445752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.445784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.445944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.445962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.446112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.446131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.446224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.446250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.446328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.446345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.446553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.446570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.446769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.446785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 
00:26:02.875 [2024-11-04 16:37:29.447022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.447054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.447179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.447212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.875 [2024-11-04 16:37:29.447403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-11-04 16:37:29.447437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.875 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.447635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.447671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.447857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.447890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 
00:26:02.876 [2024-11-04 16:37:29.448173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.448325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.448493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.448649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.448739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 
00:26:02.876 [2024-11-04 16:37:29.448816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.448967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.448979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.449045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.449056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.449280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.449294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.449439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.449451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 
00:26:02.876 [2024-11-04 16:37:29.449529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.449540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.449759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.449793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.450019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.450051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.450234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.450268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.450534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.450567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 
00:26:02.876 [2024-11-04 16:37:29.450830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.450850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.451062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.451195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.451355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.451595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 
00:26:02.876 [2024-11-04 16:37:29.451807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.451965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.451977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.452065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.452075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.452266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.452300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.876 qpair failed and we were unable to recover it. 00:26:02.876 [2024-11-04 16:37:29.452426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-11-04 16:37:29.452459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 
00:26:02.877 [2024-11-04 16:37:29.452663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.452676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.452762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.452774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.452925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.452959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.453095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.453336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 
00:26:02.877 [2024-11-04 16:37:29.453485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.453696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.453859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.453960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.453971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 00:26:02.877 [2024-11-04 16:37:29.454104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.877 [2024-11-04 16:37:29.454137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.877 qpair failed and we were unable to recover it. 
00:26:02.877 [2024-11-04 16:37:29.454275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.454307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.454422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.454456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.454674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.454708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.454900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.454933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.455140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.455174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.455417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.455451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.455718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.455752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.455951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.455985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.456936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.456947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.457167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.457179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.457328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.457339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.457455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.457489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.877 qpair failed and we were unable to recover it.
00:26:02.877 [2024-11-04 16:37:29.457729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.877 [2024-11-04 16:37:29.457764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.458909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.458922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.459942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.459953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.460092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.460103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.460250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.460263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.460483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.460496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.460642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.460677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.460913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.460925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.461122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.461155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.461302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.461342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.461637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.461680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.461829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.461841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.461925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.461936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.462009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.462020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.462235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.462268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.462470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.462504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.462784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.462817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.463059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.463072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.463245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.463257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.463414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.463427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.463570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.463582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.463850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.463884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.464067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.464100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.464400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.464435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.464724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.464760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.464984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.878 [2024-11-04 16:37:29.465019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.878 qpair failed and we were unable to recover it.
00:26:02.878 [2024-11-04 16:37:29.465220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.465252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.465503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.465537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.465781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.465817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.465997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.466031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.466214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.466248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.466442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.466475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.466738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.466751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.466981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.466993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.467217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.467228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.467398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.467612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.467625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.467776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.467789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.467949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.467960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.468022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.468033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.468252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.468284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.468429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.468461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.468641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.468675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.468855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.468888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.469015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.469049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.469229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.469261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.469519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.469553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.469762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.469795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.470038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.470050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.470220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.470260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.470530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.470563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.470769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.470802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.470989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.471022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.471226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.471258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.471530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.471565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.471785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.471930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.471943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.472145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.472178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.472378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.879 [2024-11-04 16:37:29.472410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.879 qpair failed and we were unable to recover it.
00:26:02.879 [2024-11-04 16:37:29.472680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.472714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.472952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.472964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.473113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.473124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.473275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.473287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.473441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.473475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.473660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.473695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.473890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.473922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.474197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.474210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.474430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.880 [2024-11-04 16:37:29.474443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.880 qpair failed and we were unable to recover it.
00:26:02.880 [2024-11-04 16:37:29.474636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.474649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.474788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.474800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.474942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.474973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.475099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.475131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.475336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.475372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 
00:26:02.880 [2024-11-04 16:37:29.475598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.475615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.475828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.475840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.476066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.476099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.476406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.476445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.476715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.476750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 
00:26:02.880 [2024-11-04 16:37:29.476959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.476992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.477227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.477239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.477381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.477394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.477622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.477656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.477971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.478006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 
00:26:02.880 [2024-11-04 16:37:29.478215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.478227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.478424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.478436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.478579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.478621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.478875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.478906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.479104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.479136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 
00:26:02.880 [2024-11-04 16:37:29.479332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.479365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.479625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.479818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.479829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.479933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.479945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.480144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.480157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 
00:26:02.880 [2024-11-04 16:37:29.480305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.480316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.480395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.480422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.480713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.480747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.480874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.480907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.880 qpair failed and we were unable to recover it. 00:26:02.880 [2024-11-04 16:37:29.481085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.880 [2024-11-04 16:37:29.481118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.481308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.481341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.481582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.481626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.481813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.481825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.481967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.482214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.482477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.482639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.482869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.482941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.482952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.483548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.483572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.483800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.483816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.484017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.484195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.484344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.484519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.484734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.484898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.484912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.485402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.485976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.485989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.486119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.486262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.486441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.486624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.486722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.486818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.486974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.486988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.487060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.487167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.487277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.487365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 
00:26:02.881 [2024-11-04 16:37:29.487439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.881 qpair failed and we were unable to recover it. 00:26:02.881 [2024-11-04 16:37:29.487550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.881 [2024-11-04 16:37:29.487562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.487637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.487648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.487726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.487737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.487819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.487830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 
00:26:02.882 [2024-11-04 16:37:29.487895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.487907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.487985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.487997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.488076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.488087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.488165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.488175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 00:26:02.882 [2024-11-04 16:37:29.488272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.882 [2024-11-04 16:37:29.488283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.882 qpair failed and we were unable to recover it. 
00:26:02.882 [2024-11-04 16:37:29.488704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.882 [2024-11-04 16:37:29.488743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:02.882 qpair failed and we were unable to recover it.
00:26:02.883 [2024-11-04 16:37:29.492555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.492565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.492770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.492783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.492916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.492929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.493190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.493775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.493859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.493871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.494476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.494852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.494934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.494946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.495548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.495903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.495915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.496056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.496069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 
00:26:02.883 [2024-11-04 16:37:29.496146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.496158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.496229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.496241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.883 [2024-11-04 16:37:29.496319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.883 [2024-11-04 16:37:29.496332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.883 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.496529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.496624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.496636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.496715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.496725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.496894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.496906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.497449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.497882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.497895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.498035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.498570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.498902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.498914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.499091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 
00:26:02.884 [2024-11-04 16:37:29.499489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.499829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.884 [2024-11-04 16:37:29.499843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.884 qpair failed and we were unable to recover it. 00:26:02.884 [2024-11-04 16:37:29.500046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.500204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.500713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.500884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.500895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.501206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.501776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.501966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.501979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.502298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.502744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.502987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.502999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.503444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.503827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 
00:26:02.885 [2024-11-04 16:37:29.503966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.503978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.504065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.885 [2024-11-04 16:37:29.504077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.885 qpair failed and we were unable to recover it. 00:26:02.885 [2024-11-04 16:37:29.504154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.504536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.504863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.504956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.504968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.505562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.505958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.505969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.506094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.506190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.506284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.506376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.506624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.506705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.506797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.506808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.507345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.507882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.507962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.507972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.508116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.508128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.508277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.508289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 00:26:02.886 [2024-11-04 16:37:29.508439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.508450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.886 qpair failed and we were unable to recover it. 
00:26:02.886 [2024-11-04 16:37:29.508532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.886 [2024-11-04 16:37:29.508545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.508616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.508628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.508719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.508731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.508795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.508808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.508885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.508898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.508964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.508975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.509609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.509956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.509969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.510111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.510262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.510406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.510553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.510781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.510859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.510947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.510958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.511275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.511819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.511913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.511926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 
00:26:02.887 [2024-11-04 16:37:29.512523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.887 [2024-11-04 16:37:29.512865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.887 [2024-11-04 16:37:29.512876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.887 qpair failed and we were unable to recover it. 00:26:02.888 [2024-11-04 16:37:29.512954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.888 [2024-11-04 16:37:29.512967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.888 qpair failed and we were unable to recover it. 
00:26:02.888 [2024-11-04 16:37:29.513026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.513944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.513955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.514986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.514999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.515912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.515923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.516940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.516978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.517067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.888 [2024-11-04 16:37:29.517086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.888 qpair failed and we were unable to recover it.
00:26:02.888 [2024-11-04 16:37:29.517173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.517941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.517952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.518869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.518880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.519956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.519968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.520939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.520950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.521032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.521043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.521125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.521137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.521334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.521347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.521420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.889 [2024-11-04 16:37:29.521432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.889 qpair failed and we were unable to recover it.
00:26:02.889 [2024-11-04 16:37:29.521584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.521596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.521779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.521791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.521926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.521937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.522966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.522977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.523917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.523928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.890 [2024-11-04 16:37:29.524619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.890 qpair failed and we were unable to recover it.
00:26:02.890 [2024-11-04 16:37:29.524751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.524763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.524836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.524847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.524920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.524931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 
00:26:02.890 [2024-11-04 16:37:29.525305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.525747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 
00:26:02.890 [2024-11-04 16:37:29.525910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.525922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.526071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.526082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.526157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.526169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.890 [2024-11-04 16:37:29.526300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.890 [2024-11-04 16:37:29.526312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.890 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.526457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.526469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.526538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.526550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.526622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.526633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.526714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.526725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.526923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.526935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.527232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.527873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.527958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.527992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.528216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.528249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.528425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.528462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.528733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.528777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.528930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.528941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.529105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.529138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.529314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.529346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.529589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.529682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.529806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.529839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.530019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.530052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.530243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.530275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.530461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.530493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.530686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.530721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.530957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.530968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.531072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.531106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.531239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.531270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.531545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.531577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.531782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.531815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.531923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.531955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.532142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.532278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.532445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.532525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.532666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 
00:26:02.891 [2024-11-04 16:37:29.532896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.891 [2024-11-04 16:37:29.532907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.891 qpair failed and we were unable to recover it. 00:26:02.891 [2024-11-04 16:37:29.532990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.533086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.533170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.533413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.533581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.533831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.533863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.533981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.534013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.534186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.534219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.534459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.534470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.534655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.534678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.534854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.534886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.535129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.535161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.535356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.535388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.535637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.535672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.535881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.535892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.536152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.536185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.536372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.536405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.536594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.536666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.536851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.536885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.537081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.537113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.537381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.537392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.537474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.537487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.537642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.537653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.537782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.537793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.537988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.538593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.538867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.538878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 
00:26:02.892 [2024-11-04 16:37:29.539390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.892 qpair failed and we were unable to recover it. 00:26:02.892 [2024-11-04 16:37:29.539947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.892 [2024-11-04 16:37:29.539957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.540110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.540122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.540250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.540263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.540386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.540418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.540542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.540575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.540855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.540889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.541158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.541197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.541474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.541506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.541713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.541748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.541943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.541976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.542170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.542203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.542405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.542439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.542628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.542662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.542838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.542850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.543010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.543042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.543310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.543525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.543558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.543828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.543862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.544041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.544053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.544200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.544233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.544481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.544515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.544830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.544864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.545143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.545155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.545380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.545392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.545565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.545779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.545814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.546016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.546028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.546263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.546296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.546536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.546568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.546773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.546808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.547072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.547084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 
00:26:02.893 [2024-11-04 16:37:29.547294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.547328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.893 qpair failed and we were unable to recover it. 00:26:02.893 [2024-11-04 16:37:29.547517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.893 [2024-11-04 16:37:29.547549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.547808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.547842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.548002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.548034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.548302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.548335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.548581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.548636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.548850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.548885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.549147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.549180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.549401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.549440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.549614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.549633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.549802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.549835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.550077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.550110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.550382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.550414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.550550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.550583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.550807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.550841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.551028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.551079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.551317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.551334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.551557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.551570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.551768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.551781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.551936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.551948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.552089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.552101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.552302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.552335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.552577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.552619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.552916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.552950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.553210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.553243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.553469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.553502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.553725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.553760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.554020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.554310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.554539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.554696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.554785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.554925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.554937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.555140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.555174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.555417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.555450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 
00:26:02.894 [2024-11-04 16:37:29.555703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.555737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.555984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.556018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.556205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.894 [2024-11-04 16:37:29.556239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.894 qpair failed and we were unable to recover it. 00:26:02.894 [2024-11-04 16:37:29.556520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.556532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.556682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.556694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.556787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.556799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.556877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.556887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.556973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.556985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.557200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.557212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.557293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.557303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.557558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.557570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.557793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.557806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.557956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.557968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.558187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.558199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.558366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.558378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.558510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.558522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.558741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.558754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.558983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.558995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.559153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.559167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.559356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.559369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.559446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.559459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.559610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.559623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.559819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.559832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.559997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.560160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.560246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.560471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.560681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.560852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.560864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.561010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.561131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.561228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.561381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.561533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 00:26:02.895 [2024-11-04 16:37:29.561676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.895 [2024-11-04 16:37:29.561689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.895 qpair failed and we were unable to recover it. 
00:26:02.895 [2024-11-04 16:37:29.561853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.895 [2024-11-04 16:37:29.561865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.895 qpair failed and we were unable to recover it.
00:26:02.895 [2024-11-04 16:37:29.562007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.895 [2024-11-04 16:37:29.562019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.895 qpair failed and we were unable to recover it.
00:26:02.895 [2024-11-04 16:37:29.562242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.895 [2024-11-04 16:37:29.562253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.895 qpair failed and we were unable to recover it.
00:26:02.895 [2024-11-04 16:37:29.562391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.895 [2024-11-04 16:37:29.562403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.895 qpair failed and we were unable to recover it.
00:26:02.895 [2024-11-04 16:37:29.562625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.895 [2024-11-04 16:37:29.562660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.895 qpair failed and we were unable to recover it.
00:26:02.895 [2024-11-04 16:37:29.562906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.562939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.563132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.563165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.563407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.563441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.563687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.563722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.564018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.564051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.564317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.564351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.564644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.564679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.564945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.564978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.565267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.565301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.565570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.565615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.565791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.565824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.566065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.566077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.566294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.566306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.566529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.566541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.566763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.566776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.566917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.566950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.567217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.567250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.567497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.567530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.567718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.567753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.568002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.568014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.568231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.568246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.568406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.568418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.568636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.568670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.568940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.568974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.569264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.569297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.569475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.569507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.569756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.569790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.570062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.570074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.570285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.570298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.570443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.570455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.570532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.570543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.570829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.570863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.571067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.571108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.571306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.571318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.571559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.571589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.571843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.571877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.572139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.572172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.572457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.572469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.896 [2024-11-04 16:37:29.572620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.896 [2024-11-04 16:37:29.572649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.896 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.572807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.572840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.573116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.573149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.573275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.573320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.573558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.573589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.573793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.573828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.573957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.573990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.574256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.574289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.574559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.574593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.574857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.574891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.575190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.575223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.575529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.575561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.575702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.575737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.576032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.576065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.576293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.576519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.576531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.576667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.576680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.576908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.576942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.577184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.577218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.577485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.577517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.577642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.577675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.577948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.577983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.578270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.578284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.578479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.578491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.578635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.578647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.578792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.578804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.578963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.578996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.579188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.579221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.579436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.579469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.579749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.579785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.580032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.580065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.580248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.580281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.580498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.580532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.580800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.580835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.897 qpair failed and we were unable to recover it.
00:26:02.897 [2024-11-04 16:37:29.581053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.897 [2024-11-04 16:37:29.581086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.581282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.581294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.581447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.581458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.581596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.581615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.581709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.581720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.581914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.581948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.582231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.582265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.582455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.582467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.582682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.582694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.582831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.582843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.582986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.582998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.583116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.583149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.583335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.583369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.583557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.583590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.583804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.583836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.584015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.584027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.584209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.584221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.584378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.898 [2024-11-04 16:37:29.584390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.898 qpair failed and we were unable to recover it.
00:26:02.898 [2024-11-04 16:37:29.584523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.584535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.584744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.584784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.584987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.585022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.585279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.585313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.585561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.585594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 
00:26:02.898 [2024-11-04 16:37:29.585852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.585886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.586077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.586110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.586355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.586389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.586577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.586620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.586889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.586922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 
00:26:02.898 [2024-11-04 16:37:29.587102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.587142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.587403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.587437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.587629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.587664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.587863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.587897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.588161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.588173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 
00:26:02.898 [2024-11-04 16:37:29.588345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.588357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.588570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.588614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.588908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.588941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.589193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.589216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 00:26:02.898 [2024-11-04 16:37:29.589429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-11-04 16:37:29.589462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.898 qpair failed and we were unable to recover it. 
00:26:02.898 [2024-11-04 16:37:29.589731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.589766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.590011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.590045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.590182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.590216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.590484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.590517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.590802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.590837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.591023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.591178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.591336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.591544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.591649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.591888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.591923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.592069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.592102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.592401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.592436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.592755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.592790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.593074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.593108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.593354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.593388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.593651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.593686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.593929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.593941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.594116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.594150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.594430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.594463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.594686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.594721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.594909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.594942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.595186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.595220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.595328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.595362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.595620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.595655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.595861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.595897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.596156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.596189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.596468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.596502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.596710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.596744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.597020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.597054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.597334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.597374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.597638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.597673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.597882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.597915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.598081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.598093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.598297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.598330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.598535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.598568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.598845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.598880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.599125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.599158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.599296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.599329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 00:26:02.899 [2024-11-04 16:37:29.599593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.599636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.899 qpair failed and we were unable to recover it. 
00:26:02.899 [2024-11-04 16:37:29.599831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.899 [2024-11-04 16:37:29.599864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.600113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.600147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.600415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.600449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.600628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.600663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.600930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.600964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.900 [2024-11-04 16:37:29.601214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.601247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.601386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.601420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.601665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.601699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.601971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.602004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.602282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.602294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.900 [2024-11-04 16:37:29.602519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.602532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.602682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.602695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.602843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.602855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.603001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.603014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.603236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.603270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.900 [2024-11-04 16:37:29.603550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.603583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.603722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.603756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.604010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.604044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.604331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.604343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.604416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.604427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.900 [2024-11-04 16:37:29.604645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.604680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.604858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.604891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.605078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.605112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.605298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.605332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 00:26:02.900 [2024-11-04 16:37:29.605580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.605657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.900 [2024-11-04 16:37:29.605951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.900 [2024-11-04 16:37:29.605984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.900 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.633900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.633933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.634159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.634192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.634490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.634524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.634702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.634926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.634959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.635098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.635130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.635336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.635349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.635563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.635597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.635909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.635943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.636108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.636120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.636278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.636317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.636497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.636530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.636776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.636811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.637086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.637119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.637345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.637591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.637610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.637842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.637854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.638102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.638114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.638342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.638355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.638435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.638446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.638591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.638610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.638766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.638779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.639018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.639031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.639259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.639271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.639360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.639371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.639546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.639580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.639784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.639819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.639996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.640028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.640206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.640246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.640399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.640412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 
00:26:02.903 [2024-11-04 16:37:29.640610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.640622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.903 qpair failed and we were unable to recover it. 00:26:02.903 [2024-11-04 16:37:29.640855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.903 [2024-11-04 16:37:29.640889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.641088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.641122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.641377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.641410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.641652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.641687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.641872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.641905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.642124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.642158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.642350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.642383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.642639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.642652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.642794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.642806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.643057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.643090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.643384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.643417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.643621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.643634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.643862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.643875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.644099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.644132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.644402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.644436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.644646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.644660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.644871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.644904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.645089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.645123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.645400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.645434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.645697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.645738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.646025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.646059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.646303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.646315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.646462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.646474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.646682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.646717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.646842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.646873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.647168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.647201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.647450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.647483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.647665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.647702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.647952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.647986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.648264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.648302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.648371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.648383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.648621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.648655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.648944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.648978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.649252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.649285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.649413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.649448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.649683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.649696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.649926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.649938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.650097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.650109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.650318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.650331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.650562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.650595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.650862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.650895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.651163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.651196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.651493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.651526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 00:26:02.904 [2024-11-04 16:37:29.651822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.904 [2024-11-04 16:37:29.651857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:02.904 qpair failed and we were unable to recover it. 
00:26:02.904 [2024-11-04 16:37:29.652052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.904 [2024-11-04 16:37:29.652087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:02.904 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats continuously with identical tqpair=0x7f64a4000b90, addr=10.0.0.2, port=4420 from 2024-11-04 16:37:29.652338 through 16:37:29.678303 (wall-clock 00:26:02.904 to 00:26:03.192); the repeated records are condensed here.]
00:26:03.192 [2024-11-04 16:37:29.678521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.678536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.678707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.678742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.678965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.679000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.679183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.679220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.679397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.679410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 
00:26:03.192 [2024-11-04 16:37:29.679660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.679696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.679897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.679932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.680126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.680165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.680316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.680364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.680506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.680520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 
00:26:03.192 [2024-11-04 16:37:29.680755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.680792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.680983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.681019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.681240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.681275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.681484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.681518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.681803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.681839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 
00:26:03.192 [2024-11-04 16:37:29.682119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.682151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.682435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.682470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.682727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.682764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.683019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.683053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 00:26:03.192 [2024-11-04 16:37:29.683353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.192 [2024-11-04 16:37:29.683386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.192 qpair failed and we were unable to recover it. 
00:26:03.192 [2024-11-04 16:37:29.683640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.683655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.683902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.683937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.684214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.684248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.684534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.684546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.684800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.684845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.685065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.685099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.685309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.685322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.685486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.685520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.685810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.685846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.686114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.686151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.686411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.686447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.686734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.686747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.686957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.686970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.687081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.687095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.687338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.687373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.687573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.687616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.687836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.687869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.688124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.688158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.688280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.688313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.688561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.688751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.688787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.688996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.689030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.689309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.689346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.689624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.689661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.689945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.689980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.690198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.690233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.690514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.690547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.690776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.690817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.690968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.691004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.691199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.691234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.691437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.691472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.691674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.691711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.691910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.691945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.692131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.692163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.692427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.692461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 
00:26:03.193 [2024-11-04 16:37:29.692700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.692713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.692946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.692960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.693172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.693185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.193 qpair failed and we were unable to recover it. 00:26:03.193 [2024-11-04 16:37:29.693364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.193 [2024-11-04 16:37:29.693376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.693637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.693674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 
00:26:03.194 [2024-11-04 16:37:29.693949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.693985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.694249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.694285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.694624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.694660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.694939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.694974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.695187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.695223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 
00:26:03.194 [2024-11-04 16:37:29.695427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.695463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.695722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.695737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.695902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.695915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.696088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.696101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.696313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.696347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 
00:26:03.194 [2024-11-04 16:37:29.696656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.696691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.696982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.697016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.697160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.697196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.697445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.697459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 00:26:03.194 [2024-11-04 16:37:29.697638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-11-04 16:37:29.697676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.194 qpair failed and we were unable to recover it. 
00:26:03.194 [2024-11-04 16:37:29.697903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.194 [2024-11-04 16:37:29.697937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.194 qpair failed and we were unable to recover it.
00:26:03.197 [2024-11-04 16:37:29.722910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.722923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.723086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.723100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.723357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.723370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.723541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.723554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.723699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.723715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 
00:26:03.197 [2024-11-04 16:37:29.723859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.723874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.724049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.724062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.724244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.724258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.724474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.724488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.724677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.724691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 
00:26:03.197 [2024-11-04 16:37:29.724997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.725185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.725368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.725619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.197 [2024-11-04 16:37:29.725814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 
00:26:03.197 [2024-11-04 16:37:29.725978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.197 [2024-11-04 16:37:29.725991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.197 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.726060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.726074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.726310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.726407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.726419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.726634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.726649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.726790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.726804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.727056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.727069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.727301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.727315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.727569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.727584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.727761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.727774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.727956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.727972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.728120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.728133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.728388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.728403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.728559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.728572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.728807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.728820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.729051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.729163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.729266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.729439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.729612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.729796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.729909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.729922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.730162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.730175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.730421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.730436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.730674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.730688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.730928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.730943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.731033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.731045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.731291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.731304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.731453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.731466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.731690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.731705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.731846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.731861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.732086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.732101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.732326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.732339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.732533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.732547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.732739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 
00:26:03.198 [2024-11-04 16:37:29.732980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.732994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.733156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.733171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.733277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.733291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.733503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.733516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.198 qpair failed and we were unable to recover it. 00:26:03.198 [2024-11-04 16:37:29.733686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.198 [2024-11-04 16:37:29.733702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.733924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.733940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.734147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.734161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.734321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.734338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.734543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.734557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.734783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.734797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.735088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.735204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.735403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.735567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.735758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.735967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.735980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.736133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.736146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.736326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.736339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.736549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.736563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.736754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.736767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.736986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.737182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.737353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.737459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.737618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.737777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.737946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.737960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.738119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.738131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.738335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.738347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.738488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.738501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.738676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.738689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.738806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.739011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.739026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.739210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.739225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.739255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11af0 (9): Bad file descriptor 00:26:03.199 [2024-11-04 16:37:29.739417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.739464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.739668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.739712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.739882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.739904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.740105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.740124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.740428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.740447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.740742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.740763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 
00:26:03.199 [2024-11-04 16:37:29.740999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-11-04 16:37:29.741017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.199 qpair failed and we were unable to recover it. 00:26:03.199 [2024-11-04 16:37:29.741188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.741206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.741369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.741391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.741632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.741649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.741839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.741853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.742038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.742284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.742473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.742560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.742729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.742950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.742963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.743210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.743224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.743362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.743377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.743586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.743609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.743813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.743828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.744037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.744050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.744208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.744222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.744463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.744477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.744733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.744748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.744903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.744919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.745075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.745094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.745320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.745335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.745499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.745514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.745682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.745697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.745930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.745942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.746150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.746164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.746443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.746456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.746675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.746690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.746920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.746934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.747192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.747205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.747423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.747438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.747595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.747615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.747845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.747859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.747935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.747948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 00:26:03.200 [2024-11-04 16:37:29.748163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-11-04 16:37:29.748177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.200 qpair failed and we were unable to recover it. 
00:26:03.200 [2024-11-04 16:37:29.748411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.748425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.748633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.748648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.748901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.748915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.749145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.749159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.749419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.749431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.749576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.749589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.749688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.749700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.749805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.750326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.750858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.750997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.751171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.751392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.751493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.751655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.751832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.751845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.752017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.752193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.752354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.752524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.752690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.752866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.752883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.753027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.753269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.753501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.753617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.753754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.753942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.753956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.754185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.754199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.754408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.754423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.754648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.754662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 
00:26:03.201 [2024-11-04 16:37:29.754895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.754910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.755177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.755208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.755464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.755478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.201 qpair failed and we were unable to recover it. 00:26:03.201 [2024-11-04 16:37:29.755640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.201 [2024-11-04 16:37:29.755655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 00:26:03.202 [2024-11-04 16:37:29.755862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.755876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 
00:26:03.202 [2024-11-04 16:37:29.756024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.756037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 00:26:03.202 [2024-11-04 16:37:29.756283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.756298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 00:26:03.202 [2024-11-04 16:37:29.756461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.756475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 00:26:03.202 [2024-11-04 16:37:29.756564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.756576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 00:26:03.202 [2024-11-04 16:37:29.756801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.756815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 
00:26:03.202 [2024-11-04 16:37:29.756998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.202 [2024-11-04 16:37:29.757011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.202 qpair failed and we were unable to recover it. 
00:26:03.202 [... the above three-message sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously through [2024-11-04 16:37:29.778172], differing only in timestamps ...]
00:26:03.205 [2024-11-04 16:37:29.778403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.778416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.778516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.778528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.778733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.778746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.778889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.778901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.779157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.779171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 
00:26:03.205 [2024-11-04 16:37:29.779376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.779390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.779564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.779578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.779719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.779733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.779827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.779840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.779994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 
00:26:03.205 [2024-11-04 16:37:29.780110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.780205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.780372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.780635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.780819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.780838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 
00:26:03.205 [2024-11-04 16:37:29.780998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.781229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.781398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.781618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.781797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 
00:26:03.205 [2024-11-04 16:37:29.781917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.781936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.782209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.782227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.782464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.782482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.205 [2024-11-04 16:37:29.782726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.205 [2024-11-04 16:37:29.782746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.205 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.782981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.783000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.783210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.783234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.783495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.783513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.783753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.783771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.783976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.783993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.784150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.784167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.784378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.784395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.784615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.784635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.784790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.784807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.784958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.784973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.785167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.785180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.785328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.785340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.785545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.785558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.785698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.785712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.785851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.785866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.786015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.786174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.786291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.786450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.786676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.786931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.786944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.787147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.787160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.787365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.787378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.787532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.787546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.787700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.787714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.787939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.787953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.788100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.788294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.788521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.788617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.788809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.788820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.789042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.789190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.789270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.789458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 
00:26:03.206 [2024-11-04 16:37:29.789567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.206 [2024-11-04 16:37:29.789754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.206 [2024-11-04 16:37:29.789768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.206 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.789933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.789947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.790044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 
00:26:03.207 [2024-11-04 16:37:29.790457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.790672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.790868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.790973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.790986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.791118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.791131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 
00:26:03.207 [2024-11-04 16:37:29.791286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.791299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.791504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.791517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.791663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.791676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.791889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.791903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.792065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.792078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 
00:26:03.207 [2024-11-04 16:37:29.792226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.792238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.792448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.792461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.792699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.792713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.792884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.792898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 00:26:03.207 [2024-11-04 16:37:29.793037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.207 [2024-11-04 16:37:29.793050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.207 qpair failed and we were unable to recover it. 
00:26:03.210 [2024-11-04 16:37:29.812716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.812730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.812929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.812943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 
00:26:03.210 [2024-11-04 16:37:29.813557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.813771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.813996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.814186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 
00:26:03.210 [2024-11-04 16:37:29.814354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.814467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.814685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.814941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.814955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.815111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.815124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 
00:26:03.210 [2024-11-04 16:37:29.815270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.815283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.815507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.815521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.815615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.815628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.815830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.210 [2024-11-04 16:37:29.815845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.210 qpair failed and we were unable to recover it. 00:26:03.210 [2024-11-04 16:37:29.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.815945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.816120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.816133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.816270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.816283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.816524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.816539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.816801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.816815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.816974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.816988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.817215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.817228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.817429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.817443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.817668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.817683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.817904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.817918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.818138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.818152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.818340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.818504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.818518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.818718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.818732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.818833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.818845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.819015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.819028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.819258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.819272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.819530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.819544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.819689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.819703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.819867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.819880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.820039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.820253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.820415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.820567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.820686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.820838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.820851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.821073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.821308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.821498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.821711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.821817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.821926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.821939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 
00:26:03.211 [2024-11-04 16:37:29.822735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-11-04 16:37:29.822845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.211 qpair failed and we were unable to recover it. 00:26:03.211 [2024-11-04 16:37:29.822937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.822951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.823148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.823161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.823301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.823315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 
00:26:03.212 [2024-11-04 16:37:29.823451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.823465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.823619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.823634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.823835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.823851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.824134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.824147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.824285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.824298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 
00:26:03.212 [2024-11-04 16:37:29.824430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.824443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.824671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.824684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.824829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.824842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.825064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.825077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.825251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.825265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 
00:26:03.212 [2024-11-04 16:37:29.825496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.825510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.825689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.825703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.825995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.826141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.826342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 
00:26:03.212 [2024-11-04 16:37:29.826453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.826617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.826764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.826947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.826961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 00:26:03.212 [2024-11-04 16:37:29.827092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.212 [2024-11-04 16:37:29.827106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.212 qpair failed and we were unable to recover it. 
00:26:03.212 .. 00:26:03.215 [2024-11-04 16:37:29.827194 .. 16:37:29.847317] (the preceding posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." repeated 110 more times for tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420)
00:26:03.215 [2024-11-04 16:37:29.847458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.847472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.847614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.847629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.847728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.847742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.847828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.847843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.847994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 
00:26:03.215 [2024-11-04 16:37:29.848174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.848323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.848398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.848553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.848756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 
00:26:03.215 [2024-11-04 16:37:29.848942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.848955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.849195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.849209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.849430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.849444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.849538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.849552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.849718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.849732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 
00:26:03.215 [2024-11-04 16:37:29.849885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.849900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.215 [2024-11-04 16:37:29.850108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.215 [2024-11-04 16:37:29.850122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.215 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.850327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.850342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.850473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.850486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.850635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.850650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.850873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.850887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.851110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.851124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.851292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.851306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.851460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.851473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.851674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.851689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.851928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.851941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.852197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.852210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.852431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.852445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.852646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.852660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.852809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.852822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.853036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.853621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.853985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.853998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.854139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.854153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.854362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.854375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.854607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.854621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.854765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.854780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.854942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.854955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.855098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.855110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.855315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.855328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.855491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.855503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.855679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.855693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.855826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.855839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.856035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.856050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.856188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.856218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 
00:26:03.216 [2024-11-04 16:37:29.856367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.856381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.856580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.856593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.856794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.216 [2024-11-04 16:37:29.856806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.216 qpair failed and we were unable to recover it. 00:26:03.216 [2024-11-04 16:37:29.856981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.856994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.857138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.217 [2024-11-04 16:37:29.857375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.857537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.857703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.857795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.857898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.857919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.217 [2024-11-04 16:37:29.858133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.858320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.858483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.858575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.858723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.217 [2024-11-04 16:37:29.858889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.858902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.859074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.859086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.859287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.859299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.859483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.859496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.859726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.859742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.217 [2024-11-04 16:37:29.859834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.859846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.860102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.860115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.860324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.860339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.860473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.860486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 00:26:03.217 [2024-11-04 16:37:29.860630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.860644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.217 [2024-11-04 16:37:29.860778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.217 [2024-11-04 16:37:29.860791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.217 qpair failed and we were unable to recover it. 
00:26:03.220 [2024-11-04 16:37:29.880973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.880987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.881119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.881133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.881272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.881288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.881539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.881553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.881696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.881711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 
00:26:03.220 [2024-11-04 16:37:29.881892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.881905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.882106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.882119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.882186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.882198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.882339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.220 [2024-11-04 16:37:29.882353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.220 qpair failed and we were unable to recover it. 00:26:03.220 [2024-11-04 16:37:29.882582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.882595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.882750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.882764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.882999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.883097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.883196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.883410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.883581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.883694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.883845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.883858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.884079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.884092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.884317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.884331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.884503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.884518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.884651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.884665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.884883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.884896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.885036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.885180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.885424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.885582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.885740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.885891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.885905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.886051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.886210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.886439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.886585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.886763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.886921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.886935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.887171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.887332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.887498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.887627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.887709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.887957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.887971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.888062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.888232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.888346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.888447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 
00:26:03.221 [2024-11-04 16:37:29.888686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.888900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.888913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.221 [2024-11-04 16:37:29.889136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.221 [2024-11-04 16:37:29.889149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.221 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.889386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.889400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.889557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.889572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.889654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.889925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.889938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.890101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.890276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.890442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.890592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.890806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.890962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.890976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.891062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.891075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.891235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.891248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.891455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.891470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.891622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.891637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.891856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.891870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.892083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.892097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.892242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.892256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.892435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.892449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.892649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.892663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.892798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.892812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.893051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.893065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.893222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.893243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.893472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.893490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.893742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.893761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.893905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.893922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.894090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.894109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 00:26:03.222 [2024-11-04 16:37:29.894354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.894370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
00:26:03.222 [2024-11-04 16:37:29.894481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.222 [2024-11-04 16:37:29.894497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.222 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 16:37:29.894701 through 16:37:29.915032 ...]
00:26:03.225 [2024-11-04 16:37:29.915168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.915424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.915517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.915683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.915842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 
00:26:03.225 [2024-11-04 16:37:29.915938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.915950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.916026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.916179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.916340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.916586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 
00:26:03.225 [2024-11-04 16:37:29.916769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.916860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.916872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.917095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.917109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.917265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 00:26:03.225 [2024-11-04 16:37:29.917495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.225 [2024-11-04 16:37:29.917508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.225 qpair failed and we were unable to recover it. 
00:26:03.225 [2024-11-04 16:37:29.917677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.917692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.917857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.917870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.918479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.918948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.918962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.919111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.919125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.919281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.919294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.919451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.919464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.919646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.919659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.919825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.919838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.920061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.920311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.920412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.920583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.920711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.920894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.920906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.921074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.921227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.921331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.921486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.921579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.921772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.921984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.921998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.922150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.922298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.922311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.922396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.922409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.922621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.922635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.922844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.922858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.923036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.923199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.923305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.923462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.923693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.923957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.923971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.924137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.924150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.226 [2024-11-04 16:37:29.924354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.924369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 
00:26:03.226 [2024-11-04 16:37:29.924584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.226 [2024-11-04 16:37:29.924598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.226 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.924771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.924785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.924951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.924964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 
00:26:03.227 [2024-11-04 16:37:29.925315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.925907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.925921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 
00:26:03.227 [2024-11-04 16:37:29.926055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.926069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.926286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.926300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.926454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.926468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.926693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.926707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.926854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.926867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 
00:26:03.227 [2024-11-04 16:37:29.927078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.927091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.927190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.927204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.927347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.927360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.927533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.927548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 00:26:03.227 [2024-11-04 16:37:29.927746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.227 [2024-11-04 16:37:29.927760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.227 qpair failed and we were unable to recover it. 
00:26:03.227 [2024-11-04 16:37:29.927834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.227 [2024-11-04 16:37:29.927846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.227 qpair failed and we were unable to recover it.
00:26:03.229 [the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously from 16:37:29.927984 through 16:37:29.948026 for tqpair=0x7f64a4000b90 (and briefly tqpair=0x7f64a0000b90 at 16:37:29.943786-29.944063), all targeting addr=10.0.0.2, port=4420]
00:26:03.230 [2024-11-04 16:37:29.948163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.948177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.948408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.948420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.948562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.948575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.948747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.948761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.948893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.948906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.949040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.949215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.949397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.949544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.949804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.949972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.949984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.950147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.950243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.950479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.950665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.950812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.950984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.950997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.951147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.951292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.951458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.951560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.951782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.951960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.951972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.952192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.952307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.952477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.952630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.952726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.952890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.952903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.953034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.953190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.953279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.953431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.953618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.953883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.953903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 
00:26:03.230 [2024-11-04 16:37:29.954141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.954159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.230 [2024-11-04 16:37:29.954281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.230 [2024-11-04 16:37:29.954313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.230 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.954442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.954456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.954683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.954697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.954791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.954803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.954892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.954905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.955103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.955117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.955273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.955285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.955551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.955574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.955763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.955782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.955893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.955911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.956071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.956248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.956420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.956609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.956771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.956935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.956947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.957082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.957238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.957456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.957640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.957819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.957988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.957999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.958076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.958237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.958391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.958556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.958653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.958870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.958882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.959097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.959109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.959348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.959360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.959629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.959642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.959825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.959838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.959997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.960087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.960314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.960458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.960551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.960767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 00:26:03.231 [2024-11-04 16:37:29.960866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.231 [2024-11-04 16:37:29.960878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.231 qpair failed and we were unable to recover it. 
00:26:03.231 [2024-11-04 16:37:29.961010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.231 [2024-11-04 16:37:29.961023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.231 qpair failed and we were unable to recover it.
00:26:03.234 [... same connect()/qpair-failure pair repeated for tqpair=0x7f64a4000b90 through 16:37:29.981543 ...]
00:26:03.234 [2024-11-04 16:37:29.981753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.981766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.981853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.981866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.982067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.982244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.982394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 
00:26:03.234 [2024-11-04 16:37:29.982573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.982753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.982848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.982861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.983011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.983225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 
00:26:03.234 [2024-11-04 16:37:29.983387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.983587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.983741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.983904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.983918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.984139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.984153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 
00:26:03.234 [2024-11-04 16:37:29.984369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.984384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.984525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.984539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.984705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.984720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.984929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.984943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.985031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.985043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 
00:26:03.234 [2024-11-04 16:37:29.985196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.985208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.985407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.985419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.985660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.985674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.985833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.985848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.986048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.986062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 
00:26:03.234 [2024-11-04 16:37:29.986254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.986267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.986425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.986438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.986660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.986673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.234 [2024-11-04 16:37:29.986933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.234 [2024-11-04 16:37:29.986946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.234 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.987177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 
00:26:03.235 [2024-11-04 16:37:29.987345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.987493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.987679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.987839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.987983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.987997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 
00:26:03.235 [2024-11-04 16:37:29.988152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.988321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.988471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.988634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.988745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 
00:26:03.235 [2024-11-04 16:37:29.988974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.988988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.989137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.989150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.989400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.989414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.235 [2024-11-04 16:37:29.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.235 [2024-11-04 16:37:29.989569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.235 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.989711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.989725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.989890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.989905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.989984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.989999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.990092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.990246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.990391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.990552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.990723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.990870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.990885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.991038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.991050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.991205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.991218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.991413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.991425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.991644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.991657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.991855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.991868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.992016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.992185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.992279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.992508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.992620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.992848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.992862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.993003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.993017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.993231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.993264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.993386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.993420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.993691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.993726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.993971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.994005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.994203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.994237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.994474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.994487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.994586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.994632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.994874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.994908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.995089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.995123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.522 [2024-11-04 16:37:29.995382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.995416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 
00:26:03.522 [2024-11-04 16:37:29.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.522 [2024-11-04 16:37:29.995647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.522 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.995912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.995946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.996200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.996234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.996438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.996470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.996655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.996690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:29.996869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.996904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.997112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.997145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.997287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.997320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.997551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.997585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.997857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.997892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:29.998098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.998110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.998373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.998389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.998610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.998623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.998773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.998786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.998940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.998974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:29.999105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.999138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.999409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.999443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.999644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.999680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:29.999805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:29.999839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.000084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:30.000342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.000496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.000718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.000846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.000959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.000973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:30.001128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.001142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.001391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.001403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.001516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.001528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.001741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.001755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.001929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.001943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:30.002092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.002106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.002417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.002430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.002648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.002662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.002842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.002855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.002928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.002939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 
00:26:03.523 [2024-11-04 16:37:30.003159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.003173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.003251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.523 [2024-11-04 16:37:30.003262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.523 qpair failed and we were unable to recover it. 00:26:03.523 [2024-11-04 16:37:30.003428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.003441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.003687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.003727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.003855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.003875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.004103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.004335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.004503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.004679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.004789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.004917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.004934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.005093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.005216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.005381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.005594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.005826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.005921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.005935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.006503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.006919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.006931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.007216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.007764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.007954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.007966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 
00:26:03.524 [2024-11-04 16:37:30.008500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.524 [2024-11-04 16:37:30.008838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.524 qpair failed and we were unable to recover it. 00:26:03.524 [2024-11-04 16:37:30.008990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.009242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 
00:26:03.525 [2024-11-04 16:37:30.009404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.009510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.009709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.009875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.009888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.010113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 
00:26:03.525 [2024-11-04 16:37:30.010321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.010421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.010508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.010610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.010767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 
00:26:03.525 [2024-11-04 16:37:30.010881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.010895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 
00:26:03.525 [2024-11-04 16:37:30.011579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.011889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.011903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 00:26:03.525 [2024-11-04 16:37:30.012008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.525 [2024-11-04 16:37:30.012021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.525 qpair failed and we were unable to recover it. 
00:26:03.525 [2024-11-04 16:37:30.012098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.525 [2024-11-04 16:37:30.012111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.525 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated ~110 times between 16:37:30.012098 and 16:37:30.032372, addr=10.0.0.2, port=4420; tqpair values observed: 0x7f64a4000b90, 0x7f64a0000b90, 0x7f64ac000b90, 0xe03ba0 ...]
00:26:03.529 [2024-11-04 16:37:30.032512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.032524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.032755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.032768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.032924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.032937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-04 16:37:30.033348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.033918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.033930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-04 16:37:30.034061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.034073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.034296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.034309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.034520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.034531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.034677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.034689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.034890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.034903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-04 16:37:30.035051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-04 16:37:30.035634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.035914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.035927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.036020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.036032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.036203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.036215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-04 16:37:30.036298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.036309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.036510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.036521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.036609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.529 [2024-11-04 16:37:30.036620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.529 qpair failed and we were unable to recover it. 00:26:03.529 [2024-11-04 16:37:30.036776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.036790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.036923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.036935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.036987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.036998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.037622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.037911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.037923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.038097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.038642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.038910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.038988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.039337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.039813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.039890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.039900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.040541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.040563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.040763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.040778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.040856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.040868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.040932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.040943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.041040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.041052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.041636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.041656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.041744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.041756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.041824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.041837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.530 [2024-11-04 16:37:30.042055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.042068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.530 [2024-11-04 16:37:30.042690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.530 [2024-11-04 16:37:30.042712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.530 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.042805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.042818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.042961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.042977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.043069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.043258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.043451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.043632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.043790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.043930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.043941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.044164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.044770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.044978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.044990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.045458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.045868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.045880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.046026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.046039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.046147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.046160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.046325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.046339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.046426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.046441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 00:26:03.531 [2024-11-04 16:37:30.046530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.531 [2024-11-04 16:37:30.046544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.531 qpair failed and we were unable to recover it. 
00:26:03.531 [2024-11-04 16:37:30.046645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.046735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.046747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.046824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.046837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.046933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.046947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.047203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.047699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.047901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.047998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.048180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.048614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.048985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.048996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.049167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 
00:26:03.532 [2024-11-04 16:37:30.049684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.532 [2024-11-04 16:37:30.049875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.532 qpair failed and we were unable to recover it. 00:26:03.532 [2024-11-04 16:37:30.049941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.049952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.050157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.050906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.050918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.050996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.051446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.051913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.051925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.051996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.052345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 
00:26:03.533 [2024-11-04 16:37:30.052798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.052978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.052991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.053052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.053063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.053138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.533 [2024-11-04 16:37:30.053148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.533 qpair failed and we were unable to recover it. 00:26:03.533 [2024-11-04 16:37:30.053223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 
00:26:03.534 [2024-11-04 16:37:30.053366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.053505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.053581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.053675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.053753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 
00:26:03.534 [2024-11-04 16:37:30.053826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.053916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.053926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 
00:26:03.534 [2024-11-04 16:37:30.054250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.054792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.054803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 
00:26:03.534 [2024-11-04 16:37:30.055423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.055444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.055675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.055689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.055886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.055898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.055956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.055967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 00:26:03.534 [2024-11-04 16:37:30.056102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.534 [2024-11-04 16:37:30.056113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.534 qpair failed and we were unable to recover it. 
00:26:03.534 [2024-11-04 16:37:30.056249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.056777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.056793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.534 [2024-11-04 16:37:30.057728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.534 qpair failed and we were unable to recover it.
00:26:03.534 [2024-11-04 16:37:30.057807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.057818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.057914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.057925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.057994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.058850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.058861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.059886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.059900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.060913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.060925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.535 [2024-11-04 16:37:30.061584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.535 qpair failed and we were unable to recover it.
00:26:03.535 [2024-11-04 16:37:30.061664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.061677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.061741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.061754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.061900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.061911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.062948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.062961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.063976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.063987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.064979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.064991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.065080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.536 [2024-11-04 16:37:30.065093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.536 qpair failed and we were unable to recover it.
00:26:03.536 [2024-11-04 16:37:30.065233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.065925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.537 [2024-11-04 16:37:30.065937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.537 qpair failed and we were unable to recover it.
00:26:03.537 [2024-11-04 16:37:30.066000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 
00:26:03.537 [2024-11-04 16:37:30.066512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.066868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 
00:26:03.537 [2024-11-04 16:37:30.066951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.066963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 
00:26:03.537 [2024-11-04 16:37:30.067546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.067892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.067905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 
00:26:03.537 [2024-11-04 16:37:30.068005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 
00:26:03.537 [2024-11-04 16:37:30.068619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.068864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.068995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.537 [2024-11-04 16:37:30.069008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.537 qpair failed and we were unable to recover it. 00:26:03.537 [2024-11-04 16:37:30.069082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.069161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.069884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.069897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.070206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.070712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.070947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.070959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.071196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.071653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.071935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.071947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.072146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 
00:26:03.538 [2024-11-04 16:37:30.072645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.538 [2024-11-04 16:37:30.072816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.538 [2024-11-04 16:37:30.072828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.538 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.072963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.072974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 
00:26:03.539 [2024-11-04 16:37:30.073294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 
00:26:03.539 [2024-11-04 16:37:30.073707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.073951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.073964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 
00:26:03.539 [2024-11-04 16:37:30.074202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 00:26:03.539 [2024-11-04 16:37:30.074546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.539 [2024-11-04 16:37:30.074559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.539 qpair failed and we were unable to recover it. 
00:26:03.539 [2024-11-04 16:37:30.074630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.074643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.074723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.074735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.074827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.074839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.074899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.074909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.074988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.075134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.075214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.075291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.075379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.075610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.075622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.076268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.076290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.076434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.076446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.076690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.076702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.076782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.076794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.076992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.539 [2024-11-04 16:37:30.077987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.539 qpair failed and we were unable to recover it.
00:26:03.539 [2024-11-04 16:37:30.078165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.078176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.078427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.078440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.078609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.078622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.078703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.078715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.078863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.078875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.079875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.079887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.080944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.080955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.081964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.081976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.082071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.082083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.082316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.082328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.082461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.082474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.082718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.082730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.082901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.082914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.083941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.083958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.540 [2024-11-04 16:37:30.084028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.540 [2024-11-04 16:37:30.084041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.540 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.084972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.084984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.085188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.085365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.085634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.085743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.085929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.085997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.086097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.086409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.086624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.086721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.086936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.086949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.087085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.087096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.087339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.087351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.087567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.087580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.087748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.087760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.087963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.087975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.088941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.088953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.089045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.089057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.089223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.089234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.089370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.089533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.541 [2024-11-04 16:37:30.089545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.541 qpair failed and we were unable to recover it.
00:26:03.541 [2024-11-04 16:37:30.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.089721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.089815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.089827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.090459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.090481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.090631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.090647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.090820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.090833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.090979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.090991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.091870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.091883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.092956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.092968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.093056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.093271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.093380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.093607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.542 [2024-11-04 16:37:30.093777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.542 qpair failed and we were unable to recover it.
00:26:03.542 [2024-11-04 16:37:30.093924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.093938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.094133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.094368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.094536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.094681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 
00:26:03.542 [2024-11-04 16:37:30.094802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.094908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.094921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 
00:26:03.542 [2024-11-04 16:37:30.095429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.095831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.095846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 00:26:03.542 [2024-11-04 16:37:30.096069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.542 [2024-11-04 16:37:30.096083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.542 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.096186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.096392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.096507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.096617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.096714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.096821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.096982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.096995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.097586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.097966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.097977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.098058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.098164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.098323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.098473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.098644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.098870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.098882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.099014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.099770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.099985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.099999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.100070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.100082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.100180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.100193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.100384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.100397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.100599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.100616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.100759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.100772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.101016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.101028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 00:26:03.543 [2024-11-04 16:37:30.101159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.101171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.543 qpair failed and we were unable to recover it. 
00:26:03.543 [2024-11-04 16:37:30.101248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.543 [2024-11-04 16:37:30.101259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.101397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.101410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.101543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.101555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.101793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.101808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.101963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.101975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 
00:26:03.544 [2024-11-04 16:37:30.102080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.102092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.102296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.102308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.102970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.102991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.103156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.103262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 
00:26:03.544 [2024-11-04 16:37:30.103488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.103599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.103873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.103967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.103977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.104143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 
00:26:03.544 [2024-11-04 16:37:30.104308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.104455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.104607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.104771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.104933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.104946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 
00:26:03.544 [2024-11-04 16:37:30.105109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.105121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.105785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.105808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.105972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.105985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.106150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.106162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 00:26:03.544 [2024-11-04 16:37:30.106351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.544 [2024-11-04 16:37:30.106364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.544 qpair failed and we were unable to recover it. 
00:26:03.544 [2024-11-04 16:37:30.106460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.544 [2024-11-04 16:37:30.106472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.544 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats more than 100 times between 16:37:30.106 and 16:37:30.126 ...]
00:26:03.547 [2024-11-04 16:37:30.126051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.547 [2024-11-04 16:37:30.126085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.547 qpair failed and we were unable to recover it.
00:26:03.547 [2024-11-04 16:37:30.126338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.547 [2024-11-04 16:37:30.126350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.547 qpair failed and we were unable to recover it. 00:26:03.547 [2024-11-04 16:37:30.126554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.547 [2024-11-04 16:37:30.126567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.547 qpair failed and we were unable to recover it. 00:26:03.547 [2024-11-04 16:37:30.127289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.547 [2024-11-04 16:37:30.127310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.547 qpair failed and we were unable to recover it. 00:26:03.547 [2024-11-04 16:37:30.127525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.547 [2024-11-04 16:37:30.127561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.547 qpair failed and we were unable to recover it. 00:26:03.547 [2024-11-04 16:37:30.127824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.547 [2024-11-04 16:37:30.127897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.547 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.128046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.128083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.128296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.128332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.128610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.128646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.128840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.128874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.129057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.129092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.129302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.129336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.129582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.129626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.129811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.129844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.130026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.130197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.130304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.130487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.130691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.130919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.130956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.131178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.131212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.131429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.131462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.131658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.131678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.131862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.131896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.132109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.132144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.132352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.132387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.132533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.132568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.132727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.132763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.132901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.132919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.133128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.133142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.133389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.133430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.133557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.133592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.133815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.133849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.133974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.134188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.134398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.134699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.134866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.134951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.134962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.135110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.135122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.135289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.135302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 
00:26:03.548 [2024-11-04 16:37:30.135452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.135464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.135635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.135648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.135739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.548 [2024-11-04 16:37:30.135773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.548 qpair failed and we were unable to recover it. 00:26:03.548 [2024-11-04 16:37:30.136016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.136250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.136466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.136690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.136849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.136955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.136967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.137059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.137272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.137576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.137695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.137778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.137928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.137940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.138026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.138039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.138224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.138248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.138443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.138456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.138593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.138619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.138823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.138835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.138994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.139142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.139376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.139612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.139841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.139944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.139955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.140204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.140217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.140459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.140491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.140755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.140790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.140988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.141217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.141378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.141525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.141845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 00:26:03.549 [2024-11-04 16:37:30.141961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.141974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it. 
00:26:03.549 [2024-11-04 16:37:30.142122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.549 [2024-11-04 16:37:30.142136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.549 qpair failed and we were unable to recover it.
[identical connect()/qpair failure sequence repeats from 16:37:30.142274 through 16:37:30.167119]
00:26:03.552 [2024-11-04 16:37:30.167242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.552 [2024-11-04 16:37:30.167276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.552 qpair failed and we were unable to recover it. 00:26:03.552 [2024-11-04 16:37:30.167538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.167571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.167764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.167777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.167941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.167954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.168131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.168144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.168396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.168429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.168614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.168643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.168862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.168896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.169024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.169059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.169269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.169301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.169540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.169562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.169777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.169813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.170063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.170097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.170289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.170322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.170475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.170488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.170622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.170636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.170853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.170865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.171008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.171020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.171169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.171202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.171450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.171487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.171624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.171657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.171852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.171865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.172009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.172041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.172223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.172256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.172510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.172543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.172753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.172766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.172865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.172877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.173148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.173314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.173481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.173666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.173881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.173975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.173986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.174255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.174291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.174565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.174599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 
00:26:03.553 [2024-11-04 16:37:30.174817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.174829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.175003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.175032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.175254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.175287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.175489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.553 [2024-11-04 16:37:30.175523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.553 qpair failed and we were unable to recover it. 00:26:03.553 [2024-11-04 16:37:30.175642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.175693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.175840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.175854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.176090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.176232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.176245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.176473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.176506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.176719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.176909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.176922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.177145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.177158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.177241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.177252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.177448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.177462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.177645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.177659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.177759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.177792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.177999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.178033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.178305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.178351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.178526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.178539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.178687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.178700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.178949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.178984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.179267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.179300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.179500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.179534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.179775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.179788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.179895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.179907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.180050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.180063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.180206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.180220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.180441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.180473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.180833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.180869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.181122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.181330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.181364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.181508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.181543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.181857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.181897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.182080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.182113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.182245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.182278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.182535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.182548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.182733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.182746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.182848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.182880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.183057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.183090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.183299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.183333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 
00:26:03.554 [2024-11-04 16:37:30.183585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.554 [2024-11-04 16:37:30.183597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.554 qpair failed and we were unable to recover it. 00:26:03.554 [2024-11-04 16:37:30.183679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.555 [2024-11-04 16:37:30.183691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.555 qpair failed and we were unable to recover it. 00:26:03.555 [2024-11-04 16:37:30.183900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.555 [2024-11-04 16:37:30.183933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.555 qpair failed and we were unable to recover it. 00:26:03.555 [2024-11-04 16:37:30.184177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.555 [2024-11-04 16:37:30.184211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.555 qpair failed and we were unable to recover it. 00:26:03.555 [2024-11-04 16:37:30.184396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.555 [2024-11-04 16:37:30.184429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.555 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.211043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.211055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.211280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.211293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.211524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.211537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.211808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.211843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.212102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.212136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.212265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.212298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.212569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.212610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.212849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.212862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.213076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.213088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.213191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.213223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.213416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.213450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.213779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.213792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.213927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.213939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.214139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.214152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.214285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.214298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.214521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.214533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.214627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.214639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.214850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.214863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.215082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.215110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.215333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.215366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.215538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.215551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.215714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.215727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.215888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.215920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.216194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.216227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.216353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.216387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.216591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.216634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.216838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.216871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.217161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.217195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.217474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.217514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.558 [2024-11-04 16:37:30.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.217716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 
00:26:03.558 [2024-11-04 16:37:30.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.558 [2024-11-04 16:37:30.217972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.558 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.218182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.218216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.218493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.218531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.218734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.218748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.218961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.219246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.219278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.219494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.219528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.219698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.219760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.220000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.220151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.220385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.220469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.220553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.220793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.220828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.221139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.221172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.221397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.221430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.221690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.221725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.222013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.222046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.222342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.222645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.222680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.222965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.222999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.223186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.223219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.223471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.223517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.223681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.223705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.223916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.223950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.224142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.224174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.224381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.224414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.224683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.224696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.224847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.224881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.225130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.225163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 
00:26:03.559 [2024-11-04 16:37:30.225411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.225445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.225734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.225937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.225970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.226260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.559 [2024-11-04 16:37:30.226293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.559 qpair failed and we were unable to recover it. 00:26:03.559 [2024-11-04 16:37:30.226492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.226532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 
00:26:03.560 [2024-11-04 16:37:30.226782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.226816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.227021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.227034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.227175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.227187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.227375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.227407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.227680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.227715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 
00:26:03.560 [2024-11-04 16:37:30.227899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.227932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.228135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.228168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.228440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.228480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.228615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.228628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 00:26:03.560 [2024-11-04 16:37:30.228780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.560 [2024-11-04 16:37:30.228813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.560 qpair failed and we were unable to recover it. 
00:26:03.560 [2024-11-04 16:37:30.229013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.560 [2024-11-04 16:37:30.229046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.560 qpair failed and we were unable to recover it.
00:26:03.563 [2024-11-04 16:37:30.257419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.257453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.257732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.257773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.257914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.257927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.258163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.258196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.258467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.258500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.258799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.258834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.259032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.259045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.259288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.259323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.259533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.259567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.259778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.259791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.259875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.260083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.260096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.260341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.260354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.260533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.260567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.260860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.260874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.261090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.261103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.261364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.261398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.261541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.261576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.261807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.261820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.262073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.262285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.262319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.262483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.262496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.262733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.262770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.262932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.262966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.263223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.263258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.263460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.263494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.263770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.263807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.263939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.263953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.264179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.264213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.264435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.264469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.264730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.264765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.264971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.265006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.265229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.265263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.265543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.265578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 00:26:03.563 [2024-11-04 16:37:30.265791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.265808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.563 qpair failed and we were unable to recover it. 
00:26:03.563 [2024-11-04 16:37:30.265965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.563 [2024-11-04 16:37:30.265979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.266153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.266188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.266417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.266451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.266745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.266783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.266997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.267032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.267319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.267352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.267633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.267668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.267875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.267909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.268152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.268187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.268467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.268501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.268687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.268701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.268853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.268887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.269095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.269129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.269442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.269477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.269702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.269737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.270016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.270050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.270279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.270313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.270596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.270644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.270918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.271228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.271263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.271557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.271591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.271868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.271903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.272131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.272166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.272371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.272406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.272624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.272659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.272849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.272883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.273104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.273138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.273403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.273437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.273647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.273683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.273866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.273879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 
00:26:03.564 [2024-11-04 16:37:30.274072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.274106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.274236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.274270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.274478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.274512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.274715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.564 [2024-11-04 16:37:30.274751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.564 qpair failed and we were unable to recover it. 00:26:03.564 [2024-11-04 16:37:30.274958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.274993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.275205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.275239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.275438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.275473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.275774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.275788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.275972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.276006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.276193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.276233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.276461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.276496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.276697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.276732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.276991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.277024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.277282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.277317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.277640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.277676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.277865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.277899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.278085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.278119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.278397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.278431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.278653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.278668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.278922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.278936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.279097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.279110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.279409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.279444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.279636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.279672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.279939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.279984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.280162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.280175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.280426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.280439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.280683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.280718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.281029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.281063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.281272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.281306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.281583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.281644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.281863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.281876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.282083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.282097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.282185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.282198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.282423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.282457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.282738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.282773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.282986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.283021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.283307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.283342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.283533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.283567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.283886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.283922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.284123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.284157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.284440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.284474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.284763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.284807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.285083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.285117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.285259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.285293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.285517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.285552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 
00:26:03.565 [2024-11-04 16:37:30.285877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.565 [2024-11-04 16:37:30.285912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.565 qpair failed and we were unable to recover it. 00:26:03.565 [2024-11-04 16:37:30.286116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.286150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.286373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.286407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.286638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.286674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.286861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.286877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.287030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.287065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.287256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.287291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.287549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.287584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.287825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.287839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.287981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.287995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.288141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.288175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.288446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.288480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.288753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.288766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.288989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.289023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.289288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.289322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.289618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.289654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.289929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.289964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.290170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.290183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.290369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.290404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.290687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.290724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.290939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.290973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.291259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.291294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.291581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.291622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.291773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.291786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.292005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.292039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.292182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.292217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.292434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.292468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.292622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.292657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.292853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.292866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.293032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.293046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.293212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.293246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.293507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.293542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.293769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.294078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.294091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.294331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.294365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.294557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.294591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.294865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.294899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.295166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.295200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.295503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.295538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.295697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.295733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.296013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.296047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.296308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.296343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.296643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.296679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.296942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.296956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.566 [2024-11-04 16:37:30.297166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.297182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 
00:26:03.566 [2024-11-04 16:37:30.297273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.566 [2024-11-04 16:37:30.297285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.566 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.297511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.297545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.297835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.298176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.298211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.298463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.298498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 
00:26:03.567 [2024-11-04 16:37:30.298757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.298771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.299011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.299046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.299268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.299302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.299511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.299546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.299812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.299847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 
00:26:03.567 [2024-11-04 16:37:30.300136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.300170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.300464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.300498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.300693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.300729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.301028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.301041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.301248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 
00:26:03.567 [2024-11-04 16:37:30.301424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.301472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.301758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.301794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.301997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.302010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.302186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.302221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 00:26:03.567 [2024-11-04 16:37:30.302485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.567 [2024-11-04 16:37:30.302520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.567 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.329267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.329281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.329440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.329475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.329784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.329820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.330095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.330109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.330318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.330331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.330524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.330559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.330762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.330799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.330932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.330966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.331150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.331163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.331398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.331432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.331624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.331660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.331923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.331958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.332160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.332194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.332480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.332515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.332807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.332843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.333118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.333153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.333518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.333599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.333970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.334261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.334301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.334520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.334555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.334793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.334830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.335109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.335150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.335314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.335349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.335627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.335664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.335878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.335912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.336235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.336269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.336540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.336575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.336812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.336847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.337110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.337143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.337356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.337410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 
00:26:03.855 [2024-11-04 16:37:30.337652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.855 [2024-11-04 16:37:30.337688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.855 qpair failed and we were unable to recover it. 00:26:03.855 [2024-11-04 16:37:30.337990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.338030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.338159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.338177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.338430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.338465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.338748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.338783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.339007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.339042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.339324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.339342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.339573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.339592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.339712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.339731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.339942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.339977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.340261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.340296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.340503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.340536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.340737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.340774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.341015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.341049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.341190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.341209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.341387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.341421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.341655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.341691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.341904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.341938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.342213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.342246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.342530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.342548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.342727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.342747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.342995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.343013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.343197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.343231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.343419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.343453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.343676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.343713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.343932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.343951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.344207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.344242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.344526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.344560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.344788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.344823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.345093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.345111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.345388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.345406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.345653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.345673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.345862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.345880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.346167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.346201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.346468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.346502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 
00:26:03.856 [2024-11-04 16:37:30.346739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.346774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.346971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.347005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.347306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.347323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.347594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.347618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.856 qpair failed and we were unable to recover it. 00:26:03.856 [2024-11-04 16:37:30.347783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.856 [2024-11-04 16:37:30.347805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 
00:26:03.857 [2024-11-04 16:37:30.347962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.347980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.348230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.348265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.348485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.348520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.348744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.348780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2970606 Killed "${NVMF_APP[@]}" "$@" 00:26:03.857 [2024-11-04 16:37:30.349001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.349035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 
00:26:03.857 [2024-11-04 16:37:30.349294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.349329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:03.857 [2024-11-04 16:37:30.349599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.349652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.349934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:03.857 [2024-11-04 16:37:30.349971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.350195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.350229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 
00:26:03.857 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.857 [2024-11-04 16:37:30.350489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.350527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.857 [2024-11-04 16:37:30.350836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.350880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:03.857 [2024-11-04 16:37:30.351081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.351118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 00:26:03.857 [2024-11-04 16:37:30.351387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.857 [2024-11-04 16:37:30.351406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.857 qpair failed and we were unable to recover it. 
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2971328
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2971328
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2971328 ']'
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:03.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:03.858 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.860 [2024-11-04 16:37:30.373441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.373460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.373629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.373649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.373903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.373923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.374089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.374108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.374426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.374476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.374759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.374799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.375061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.375081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.375344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.375362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.375532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.375549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.375833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.375852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.376042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.376057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.376222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.376236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.376471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.376487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.376753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.376776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.377048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.377067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.377295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.377313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.377556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.377571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.377764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.377781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.377953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.377967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.378129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.378143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.378372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.378387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.378513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.378529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.378733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.378909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.378923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.379258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.379490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.379596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.379796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.379915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.379927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 
00:26:03.860 [2024-11-04 16:37:30.380118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.380132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.380295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.860 [2024-11-04 16:37:30.380314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.860 qpair failed and we were unable to recover it. 00:26:03.860 [2024-11-04 16:37:30.380598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.380623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.380873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.380888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.381115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.381129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.381276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.381290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.381460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.381473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.381625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.381644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.381881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.381896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.382055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.382070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.382218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.382232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.382458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.382476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.382646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.382663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.382822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.382837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.383127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.383261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.383355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.383614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.383791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.383974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.383989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.384095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.384108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.384272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.384300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.384459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.384474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.384728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.384743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.384987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.385002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.385220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.385234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.385396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.385412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.385575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.385589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.385848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.385870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.386002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.386116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.386299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.386501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.386628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.386835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.386854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.387024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.387158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.387368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.387613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.861 [2024-11-04 16:37:30.387813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 
00:26:03.861 [2024-11-04 16:37:30.387932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.861 [2024-11-04 16:37:30.387954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.861 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.388121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.388140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.388318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.388339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.388500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.388523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.388634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.388654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 
00:26:03.862 [2024-11-04 16:37:30.388808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.388827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.388999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.389018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.389179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.389198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.389291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.389308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 00:26:03.862 [2024-11-04 16:37:30.389410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.389430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 
00:26:03.862 [2024-11-04 16:37:30.389513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.862 [2024-11-04 16:37:30.389530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.862 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair failed message pairs for tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 repeated; duplicates elided ...]
[... further identical connect() failed (errno = 111) / qpair failed messages for tqpair=0x7f64ac000b90 elided ...]
00:26:03.864 [2024-11-04 16:37:30.403757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.864 [2024-11-04 16:37:30.403791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.864 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair failed messages for tqpair=0xe03ba0 repeated; duplicates elided ...]
00:26:03.864 [2024-11-04 16:37:30.405100] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:26:03.864 [2024-11-04 16:37:30.405139] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
[... identical connect() failed (errno = 111) / qpair failed messages for tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 repeated through 16:37:30.411756; duplicates elided ...]
00:26:03.865 [2024-11-04 16:37:30.411849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.411861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 
00:26:03.865 [2024-11-04 16:37:30.412625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.412903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.412916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 
00:26:03.865 [2024-11-04 16:37:30.413272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.413830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.413841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 
00:26:03.865 [2024-11-04 16:37:30.413988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 
00:26:03.865 [2024-11-04 16:37:30.414722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.414941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.414953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.415092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.865 [2024-11-04 16:37:30.415107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.865 qpair failed and we were unable to recover it. 00:26:03.865 [2024-11-04 16:37:30.415191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.415344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.415511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.415621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.415720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.415800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.415897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.415908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.415998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.416552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.416934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.416948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.417192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.417884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.417977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.417987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.418069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.418237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.418344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.418518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.418677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.418854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.418866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.419291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 
00:26:03.866 [2024-11-04 16:37:30.419826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.419910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.419923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.866 [2024-11-04 16:37:30.420085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.866 [2024-11-04 16:37:30.420097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.866 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.420177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.420330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 
00:26:03.867 [2024-11-04 16:37:30.420485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.420641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.420761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.420864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.420875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.421084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 
00:26:03.867 [2024-11-04 16:37:30.421166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.421256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.421350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.421449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 00:26:03.867 [2024-11-04 16:37:30.421672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.867 [2024-11-04 16:37:30.421685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.867 qpair failed and we were unable to recover it. 
00:26:03.867 [2024-11-04 16:37:30.421910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.421925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.422842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.422856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.423938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.423949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.424094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.424106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.424249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.424261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.424473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.424485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.424633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.424646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.424863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.424878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.425114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.867 [2024-11-04 16:37:30.425131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.867 qpair failed and we were unable to recover it.
00:26:03.867 [2024-11-04 16:37:30.425346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.425362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.425466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.425483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.425648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.425666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.425754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.425768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.425864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.425878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.426852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.426867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.427944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.427957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.428917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.428998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.429920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.429934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.430027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.430041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.430129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.430142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.430282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.430295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.430383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.430395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.868 qpair failed and we were unable to recover it.
00:26:03.868 [2024-11-04 16:37:30.430485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.868 [2024-11-04 16:37:30.430497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.430671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.430759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.430772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.430932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.430945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.431951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.431964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.432929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.432944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.433896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.433997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.434189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.434306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.434462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.434561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.434827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.435941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.435954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.436046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.436067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.436164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.869 [2024-11-04 16:37:30.436183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.869 qpair failed and we were unable to recover it.
00:26:03.869 [2024-11-04 16:37:30.436333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.436350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.436517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.436536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.436661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.436681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.436837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.436857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.437954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.437966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.438134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.438286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.438387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.438535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.870 [2024-11-04 16:37:30.438629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.870 qpair failed and we were unable to recover it.
00:26:03.870 [2024-11-04 16:37:30.438768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.438782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.438851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.438864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.438950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.438962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 
00:26:03.870 [2024-11-04 16:37:30.439355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.439796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 
00:26:03.870 [2024-11-04 16:37:30.439907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.439920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 
00:26:03.870 [2024-11-04 16:37:30.440560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.440968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.440981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 
00:26:03.870 [2024-11-04 16:37:30.441056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.441068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.441138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.441152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.870 [2024-11-04 16:37:30.441236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.870 [2024-11-04 16:37:30.441253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.870 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.441321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.441333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.441404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.441416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.441552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.441564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.441705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.441718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.441855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.441868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.442214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.442792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.442887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.442900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.443377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.443851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.443954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.443967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.444543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.444924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.444995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.445075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.445224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.445321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.445424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.445529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 
00:26:03.871 [2024-11-04 16:37:30.445611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.871 [2024-11-04 16:37:30.445623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.871 qpair failed and we were unable to recover it. 00:26:03.871 [2024-11-04 16:37:30.445759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.445772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.445915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.445929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.446389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.446981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.446994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.447069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.447497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.447856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.447869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.448094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.448683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.448923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.448936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.872 [2024-11-04 16:37:30.449267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 00:26:03.872 [2024-11-04 16:37:30.449935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.872 [2024-11-04 16:37:30.449946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.872 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.450026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.450718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.450925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.450940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.451218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.451819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.451833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.451989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.452539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.452982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.452993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.453056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.453197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.453348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.453448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.453555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.453772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.453928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.453941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 
00:26:03.873 [2024-11-04 16:37:30.454459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.873 [2024-11-04 16:37:30.454662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.873 qpair failed and we were unable to recover it. 00:26:03.873 [2024-11-04 16:37:30.454748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.454760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.454987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.454999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.455074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.455807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.455917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.455993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.456387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.456842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.456854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.457016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.457606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.457976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.457987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.458134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.458669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.458964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.458976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.459063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.459074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 
00:26:03.874 [2024-11-04 16:37:30.459142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.459154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.874 [2024-11-04 16:37:30.459219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.874 [2024-11-04 16:37:30.459233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.874 qpair failed and we were unable to recover it. 00:26:03.875 [2024-11-04 16:37:30.459327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.875 [2024-11-04 16:37:30.459339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.875 qpair failed and we were unable to recover it. 00:26:03.875 [2024-11-04 16:37:30.459401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.875 [2024-11-04 16:37:30.459412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.875 qpair failed and we were unable to recover it. 00:26:03.875 [2024-11-04 16:37:30.459544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.875 [2024-11-04 16:37:30.459556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.875 qpair failed and we were unable to recover it. 
00:26:03.877 [2024-11-04 16:37:30.467661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.467674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.467801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.467813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.467902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.467914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.467993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.877 [2024-11-04 16:37:30.468889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.877 [2024-11-04 16:37:30.468902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.877 qpair failed and we were unable to recover it.
00:26:03.878 [2024-11-04 16:37:30.472507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.472520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.472607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.472619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.472690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.472703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.472775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.472787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.472879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.472903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.472990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.473654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.473958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.473971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.474159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.474592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.474959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.474981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.475310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.475861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.475956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.475969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.476058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.476072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.476229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.476245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.476388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.476403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 
00:26:03.878 [2024-11-04 16:37:30.476491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.878 [2024-11-04 16:37:30.476506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.878 qpair failed and we were unable to recover it. 00:26:03.878 [2024-11-04 16:37:30.476666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.476682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.476765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.476779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.476861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.476875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.476956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.476971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.477120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.477736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.477962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.477980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.478403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.478982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.478998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.479091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.479729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.479978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.479996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.480367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.480948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.480966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 
00:26:03.879 [2024-11-04 16:37:30.481053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.481070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.481242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.481259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.481399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.481416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.481493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.481509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.879 qpair failed and we were unable to recover it. 00:26:03.879 [2024-11-04 16:37:30.481589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.879 [2024-11-04 16:37:30.481615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 
00:26:03.880 [2024-11-04 16:37:30.481777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.880 [2024-11-04 16:37:30.481792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 00:26:03.880 [2024-11-04 16:37:30.481877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.880 [2024-11-04 16:37:30.481894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 00:26:03.880 [2024-11-04 16:37:30.481969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.880 [2024-11-04 16:37:30.481987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 00:26:03.880 [2024-11-04 16:37:30.482080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.880 [2024-11-04 16:37:30.482093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 00:26:03.880 [2024-11-04 16:37:30.482179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.880 [2024-11-04 16:37:30.482191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.880 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.494966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.494978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.495357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.495963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.495976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.496061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.496576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.496914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.496933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.497117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.497711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.497964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.497976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.498108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.498120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 
00:26:03.883 [2024-11-04 16:37:30.498199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.498212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.498285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.498297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.498376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.883 [2024-11-04 16:37:30.498388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.883 qpair failed and we were unable to recover it. 00:26:03.883 [2024-11-04 16:37:30.498529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.498542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.498617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.498630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.498780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.498793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.498860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.498873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.498936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.498949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.884 [2024-11-04 16:37:30.499095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.499335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.499887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.499901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.500039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.500616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.500890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.500904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.501345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.501832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.501920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.501933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.502507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.502860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.502873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 
00:26:03.884 [2024-11-04 16:37:30.503007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.884 [2024-11-04 16:37:30.503019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.884 qpair failed and we were unable to recover it. 00:26:03.884 [2024-11-04 16:37:30.503097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.503513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.503919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.503931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.504219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.504828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.504920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.504933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.505280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.505869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.505960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.505972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.506427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 00:26:03.885 [2024-11-04 16:37:30.506871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.885 [2024-11-04 16:37:30.506884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.885 qpair failed and we were unable to recover it. 
00:26:03.885 [2024-11-04 16:37:30.506964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.506976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.507559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.507950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.507963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.508027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.508483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.508950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.508962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.509022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.509098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.509291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.509437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.509604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.509752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.509914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.509926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.510359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 
00:26:03.886 [2024-11-04 16:37:30.510825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.510975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.510988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.511063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.886 [2024-11-04 16:37:30.511076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.886 qpair failed and we were unable to recover it. 00:26:03.886 [2024-11-04 16:37:30.511149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.511310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.511507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.511664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.511737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.511896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.511980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.511992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.512163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.512587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.512914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.512927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.513003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.513102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.513259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.513475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.513554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.513694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.513931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.513943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.514511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.514817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.514990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.515091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.515172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.515250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.515339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 00:26:03.887 [2024-11-04 16:37:30.515431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.887 [2024-11-04 16:37:30.515444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.887 qpair failed and we were unable to recover it. 
00:26:03.887 [2024-11-04 16:37:30.515510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.887 [2024-11-04 16:37:30.515521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.887 qpair failed and we were unable to recover it.
00:26:03.887 [2024-11-04 16:37:30.515615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.887 [2024-11-04 16:37:30.515629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.515714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.515727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.515789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.515802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.515860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.515872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.515951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.515964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.516975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.516987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.517916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.517993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.518908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.518926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.519839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.519988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.520003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.520069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.520082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.520168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.520180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.520265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.520278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.520496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.888 [2024-11-04 16:37:30.520509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.888 qpair failed and we were unable to recover it.
00:26:03.888 [2024-11-04 16:37:30.520676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.520689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.520767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.520780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.520964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.520977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.521963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.521975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.522972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.522984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.523910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.523923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.524915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.524988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.889 [2024-11-04 16:37:30.525536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.889 qpair failed and we were unable to recover it.
00:26:03.889 [2024-11-04 16:37:30.525611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.525625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.525877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.525891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.525980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.525992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.526912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.526984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.527914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.527926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.528886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.528901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.529897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.529909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.530052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.530064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.530210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.530222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.530368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.530383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.530545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.530558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.890 [2024-11-04 16:37:30.530644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.890 [2024-11-04 16:37:30.530656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.890 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.530741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.530753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.530831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.530843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.530978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.530990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.531838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.531852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.532984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.532997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.533859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.533993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.534934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.534947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.535100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.535418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.535566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.891 [2024-11-04 16:37:30.535660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.891 qpair failed and we were unable to recover it.
00:26:03.891 [2024-11-04 16:37:30.535727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.535739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.535833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.535847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.535919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.535932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.536972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.536986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.537928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.537941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.538955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.538968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.539172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.539187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.539267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.539292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.539372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.539385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
00:26:03.892 [2024-11-04 16:37:30.539457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.539471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.539563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.539578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.539666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.539680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.539754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.539767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.539897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.539914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 
00:26:03.892 [2024-11-04 16:37:30.540067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.540079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.540148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.540160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.540296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.540312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.540384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.540397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 00:26:03.892 [2024-11-04 16:37:30.540574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.892 [2024-11-04 16:37:30.540587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.892 qpair failed and we were unable to recover it. 
00:26:03.892 [2024-11-04 16:37:30.540695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.892 [2024-11-04 16:37:30.540721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420
00:26:03.892 qpair failed and we were unable to recover it.
[... the same record for tqpair=0xe03ba0 repeats 2 more times (16:37:30.540806, 16:37:30.540939), then for tqpair=0x7f64a4000b90 twice more (16:37:30.541171, 16:37:30.541351) ...]
[... the connect()/qpair-failed record for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats 3 times between 16:37:30.541448 and 16:37:30.541795 ...]
00:26:03.893 [2024-11-04 16:37:30.541895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:03.893 [2024-11-04 16:37:30.541922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:03.893 [2024-11-04 16:37:30.541931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:03.893 [2024-11-04 16:37:30.541939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:03.893 [2024-11-04 16:37:30.541945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... the connect()/qpair-failed record for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats 10 times between 16:37:30.541977 and 16:37:30.543093 ...]
[... the connect()/qpair-failed record for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats 4 times between 16:37:30.543188 and 16:37:30.543544 ...]
00:26:03.893 [2024-11-04 16:37:30.543522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
[... one more connect()/qpair-failed record for tqpair=0x7f64a4000b90 at 16:37:30.543711 ...]
00:26:03.893 [2024-11-04 16:37:30.543644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:03.893 [2024-11-04 16:37:30.543757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:03.893 [2024-11-04 16:37:30.543757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... the connect()/qpair-failed record for tqpair=0x7f64a4000b90 repeats 4 times between 16:37:30.543902 and 16:37:30.544420 ...]
[... the connect()/qpair-failed record for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats 65 times between 16:37:30.544498 and 16:37:30.552668 ...]
00:26:03.895 [2024-11-04 16:37:30.552828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.552840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.552986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.553097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.553258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.553422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.553586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.553797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.553948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.553975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.554423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.554927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.554941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.555013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.555712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.555981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.555994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.556184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.556719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.556822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.556835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.557033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.557046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.557181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.557194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.557266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.557279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 
00:26:03.895 [2024-11-04 16:37:30.557358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.895 [2024-11-04 16:37:30.557372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.895 qpair failed and we were unable to recover it. 00:26:03.895 [2024-11-04 16:37:30.557486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.557499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.557707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.557719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.557860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.557873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.558022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.558189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.558293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.558525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.558647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.558793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.558806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.559056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.559739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.559851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.559993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.560341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.560858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.560953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.560965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.561692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.561848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.561988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.562133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.562348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.562453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.562552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.562693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.562847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.562860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 00:26:03.896 [2024-11-04 16:37:30.563057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.563071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it. 
00:26:03.896 [2024-11-04 16:37:30.563237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.896 [2024-11-04 16:37:30.563250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.896 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 16:37:30.563 through 16:37:30.580; each attempt ends with "qpair failed and we were unable to recover it." Duplicate log entries elided. ...]
00:26:03.899 [2024-11-04 16:37:30.580249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.899 [2024-11-04 16:37:30.580262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.899 qpair failed and we were unable to recover it. 00:26:03.899 [2024-11-04 16:37:30.580335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.899 [2024-11-04 16:37:30.580348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.899 qpair failed and we were unable to recover it. 00:26:03.899 [2024-11-04 16:37:30.580517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.899 [2024-11-04 16:37:30.580529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.899 qpair failed and we were unable to recover it. 00:26:03.899 [2024-11-04 16:37:30.580788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.580802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.580879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.580891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.580955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.580967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.581096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.581239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.581325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.581518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.581757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.581925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.581938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.582072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.582085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.582220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.582233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.582451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.582464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.582683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.582698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.582847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.582862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.583064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.583247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.583402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.583533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.583684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.583899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.583912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.584250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.584767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.584925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.584939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.585293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-11-04 16:37:30.585927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.585940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-11-04 16:37:30.586028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-11-04 16:37:30.586041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.586686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.586856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.586870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.587235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.587665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.587856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.587869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.588583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.588866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.588879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.589106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.589302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.589515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.589679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.589795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.589937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.589949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.590119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.590276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.590435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.590579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.590683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-11-04 16:37:30.590790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-11-04 16:37:30.590977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-11-04 16:37:30.590992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-11-04 16:37:30.597255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-11-04 16:37:30.597396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-11-04 16:37:30.597531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.905 (last message repeated for tqpair=0x7f64a4000b90, addr=10.0.0.2, port=4420 through [2024-11-04 16:37:30.605569])
00:26:03.905 [2024-11-04 16:37:30.605652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.605666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.605746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.605759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.605839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.605852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.605927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.605940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.606176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.606815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.606828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.606987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.607577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.607916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.607934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.608185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.608800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.608888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.608901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.609039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.609051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.609134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.609146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.609294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.609307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-11-04 16:37:30.609381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.609393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.609535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-11-04 16:37:30.609548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-11-04 16:37:30.609641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.609654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.609734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.609746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.609822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.609834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.609964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.609976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.610702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.610947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.610959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.611277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.611691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.611868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.611881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.612430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.612880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.612965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.612977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.613278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-11-04 16:37:30.613772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-11-04 16:37:30.613860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-11-04 16:37:30.613872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.907 [2024-11-04 16:37:30.613996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-11-04 16:37:30.614008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-11-04 16:37:30.614138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-11-04 16:37:30.614151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-11-04 16:37:30.614220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-11-04 16:37:30.614234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-11-04 16:37:30.614310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-11-04 16:37:30.614324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-11-04 16:37:30.614405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.907 [2024-11-04 16:37:30.614417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.907 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f64a4000b90 (addr=10.0.0.2, port=4420) from 16:37:30.614405 through 16:37:30.627011 ...]
00:26:03.910 [2024-11-04 16:37:30.626998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.910 [2024-11-04 16:37:30.627011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.910 qpair failed and we were unable to recover it.
00:26:03.910 [2024-11-04 16:37:30.627140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-11-04 16:37:30.627663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.627951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.627963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.628090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-11-04 16:37:30.628200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.628378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.628458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.628609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.628768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-11-04 16:37:30.628857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.628869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-11-04 16:37:30.629623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.629942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.629955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-11-04 16:37:30.630368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-11-04 16:37:30.630853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-11-04 16:37:30.630865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.631093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.631258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.631417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.631563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.631710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.631938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.631949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.632096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.632314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.632407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.632504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.632683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.632913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.632925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.633150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.633162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.633360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.633373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.633501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.633514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.633672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.633684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.633835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.633849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.634073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.634218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.634360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.634612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.634808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.634915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.634928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.635119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.635132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.635341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.635353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.635494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.635505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.635689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.635701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.635851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.635865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.636049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.636151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-11-04 16:37:30.636302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.636400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.636470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-11-04 16:37:30.636566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-11-04 16:37:30.636582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.636723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.636736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 
00:26:03.912 [2024-11-04 16:37:30.636866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.636878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.636956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.636967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 
00:26:03.912 [2024-11-04 16:37:30.637436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 00:26:03.912 [2024-11-04 16:37:30.637945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.912 [2024-11-04 16:37:30.637958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:03.912 qpair failed and we were unable to recover it. 
00:26:03.912 [2024-11-04 16:37:30.638076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.912 [2024-11-04 16:37:30.638088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.912 qpair failed and we were unable to recover it.
00:26:03.913 [2024-11-04 16:37:30.643880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.913 [2024-11-04 16:37:30.643904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:03.913 qpair failed and we were unable to recover it.
00:26:03.914 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:03.914 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:03.914 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:03.915 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:03.915 [2024-11-04 16:37:30.649768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.915 [2024-11-04 16:37:30.649782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.915 qpair failed and we were unable to recover it.
00:26:03.915 [2024-11-04 16:37:30.649870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.915 [2024-11-04 16:37:30.649884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.915 qpair failed and we were unable to recover it.
00:26:03.915 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.915 [2024-11-04 16:37:30.650098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.915 [2024-11-04 16:37:30.650114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.915 qpair failed and we were unable to recover it.
00:26:03.915 [2024-11-04 16:37:30.650263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.915 [2024-11-04 16:37:30.650278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:03.915 qpair failed and we were unable to recover it.
00:26:04.184 [2024-11-04 16:37:30.653569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.184 [2024-11-04 16:37:30.653608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:04.184 qpair failed and we were unable to recover it.
00:26:04.184 [2024-11-04 16:37:30.653765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.184 [2024-11-04 16:37:30.653785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:04.184 qpair failed and we were unable to recover it.
00:26:04.184 [2024-11-04 16:37:30.653948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.184 [2024-11-04 16:37:30.653967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:04.184 qpair failed and we were unable to recover it.
00:26:04.184 [2024-11-04 16:37:30.654139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.184 [2024-11-04 16:37:30.654156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.184 qpair failed and we were unable to recover it.
00:26:04.184 [2024-11-04 16:37:30.654323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.184 [2024-11-04 16:37:30.654334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.184 qpair failed and we were unable to recover it.
00:26:04.186 [2024-11-04 16:37:30.663107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-11-04 16:37:30.663118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-11-04 16:37:30.663359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-11-04 16:37:30.663372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-11-04 16:37:30.663452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.663465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.663546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.663559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.663651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.663667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.663878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.663893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.663965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.663979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.664461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.664925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.664938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.664992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.665553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.665839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.665851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.666127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.666703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.666951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.666963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.667113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.667126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-11-04 16:37:30.667322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.667334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.667419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.667431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-11-04 16:37:30.667498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-11-04 16:37:30.667511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.667609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.667622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.667701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.667714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.667815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.667828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.667958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.667970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.668388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.668854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.668945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.668957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.669407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.669849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.669931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.669944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.670362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.670775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-11-04 16:37:30.670919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.670931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.671066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.671079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.671145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.671158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.671224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-11-04 16:37:30.671236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-11-04 16:37:30.671304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.671399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.671468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.671581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.671714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.671785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.671866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.671964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.671978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.672307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.672745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.672982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.672994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.673228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.673716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.673971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.673982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.674048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.674059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-11-04 16:37:30.674119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.674131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-11-04 16:37:30.674190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-11-04 16:37:30.674202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.674641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.674922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.674991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.675063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.675470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.675805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.675903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.675932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.676430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.676822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.676912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.676925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-11-04 16:37:30.677415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-11-04 16:37:30.677677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-11-04 16:37:30.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.677773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.677786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.677854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.677866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.677995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.678307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.678924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.678936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.679220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.679633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.679975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.679986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-11-04 16:37:30.680056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.680068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.680131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.680142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.680272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.680284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.680349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.680361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-11-04 16:37:30.680429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.191 [2024-11-04 16:37:30.680440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.193 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:04.193 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:04.193 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.193 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.193 [2024-11-04 16:37:30.686850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.686861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.686991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 
00:26:04.193 [2024-11-04 16:37:30.687483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.687860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 
00:26:04.193 [2024-11-04 16:37:30.687934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.687948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 
00:26:04.193 [2024-11-04 16:37:30.688378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.193 qpair failed and we were unable to recover it. 00:26:04.193 [2024-11-04 16:37:30.688756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.193 [2024-11-04 16:37:30.688767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.688853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.688865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 
00:26:04.194 [2024-11-04 16:37:30.689026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.689252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.689493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.689644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.689725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 
00:26:04.194 [2024-11-04 16:37:30.689892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.689903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.689995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 
00:26:04.194 [2024-11-04 16:37:30.690422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 00:26:04.194 [2024-11-04 16:37:30.690862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.194 [2024-11-04 16:37:30.690873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.194 qpair failed and we were unable to recover it. 
00:26:04.196 [2024-11-04 16:37:30.701946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-11-04 16:37:30.701970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe03ba0 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 
00:26:04.197 [2024-11-04 16:37:30.707016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.707248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.707357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.707541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.707716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 
00:26:04.197 [2024-11-04 16:37:30.707816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.707979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.707991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.708167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.708188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.708402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.708415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.708573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.708585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 
00:26:04.197 [2024-11-04 16:37:30.708748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.197 [2024-11-04 16:37:30.708762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.197 qpair failed and we were unable to recover it. 00:26:04.197 [2024-11-04 16:37:30.708898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.708910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.709044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.709058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.709219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.709231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.709446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.709459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.709679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.709692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.709915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.709927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.710428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.710807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.710819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.711055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.711166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.711351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.711503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.711655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.711920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.711933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.712024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.712186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.712303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.712470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.712715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.712954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.712967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.713150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.713164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.713392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.713407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.713609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.713622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.713914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.713955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.714083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.714102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.714271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.714289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.714537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.714555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.714740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.714758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.714949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.714967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 
00:26:04.198 [2024-11-04 16:37:30.715215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.715232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.715382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.715399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.198 qpair failed and we were unable to recover it. 00:26:04.198 [2024-11-04 16:37:30.715599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.198 [2024-11-04 16:37:30.715622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.715878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.715892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.715980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.715991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 [2024-11-04 16:37:30.716238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.716250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.716324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.716335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.716530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.716543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.716694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.716706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.716929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.716941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 Malloc0 00:26:04.199 [2024-11-04 16:37:30.717137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.717149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.717369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.717382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.199 [2024-11-04 16:37:30.717606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.717618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:04.199 [2024-11-04 16:37:30.717762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.717774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.199 [2024-11-04 16:37:30.718002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.199 [2024-11-04 16:37:30.718014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.718227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.718239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.718482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.718494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.718660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.718672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 [2024-11-04 16:37:30.718821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.718832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.718978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.718990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.719139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.719151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.719372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.719383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.719579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.719590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 [2024-11-04 16:37:30.719776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.719810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.720058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.720078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.720191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.720208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.720438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.720458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.720574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.720590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 
00:26:04.199 [2024-11-04 16:37:30.720736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.199 [2024-11-04 16:37:30.720803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.720817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.721066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.721087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.721200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-11-04 16:37:30.721216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.199 qpair failed and we were unable to recover it. 00:26:04.199 [2024-11-04 16:37:30.721427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.721444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a0000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.721643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.721656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.721804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.721816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.721959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.721970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.722110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.722122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.722253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.722265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.722410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.722421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.722566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.722578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.722794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.722807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.723391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.723941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.723954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.724175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.724188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.724332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.724344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.724553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.724565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.724716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.724729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.724956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.724968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.725060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.725071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.725282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.725294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.725430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.725441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.725648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.725659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.725822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.725834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.726022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.726033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.726190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.726201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.726341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.726352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.726565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.726576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.726790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.726802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.727025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.727170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.727332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.727541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.727694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-11-04 16:37:30.727951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.727962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-11-04 16:37:30.728048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-11-04 16:37:30.728060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.728141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.728237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.728326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.728467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.728638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.728777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.728788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.729012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.729024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.201 [2024-11-04 16:37:30.729124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.729136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.201 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.201 [2024-11-04 16:37:30.729374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.729387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.201 [2024-11-04 16:37:30.729628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.729641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.729878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.729890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.730040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.730052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.730190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.730202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.730281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.730295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.730502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.730514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.730752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.730764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.731015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.731224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.731394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.731604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.731757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.731947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.731959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.732164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.732176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.732341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.732353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.732560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.732571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.732679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.732692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.732841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.732853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.733116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.733127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.733375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.733387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.733456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.733468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.733671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.733684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.733939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.733951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-11-04 16:37:30.734121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.734132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-11-04 16:37:30.734360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-11-04 16:37:30.734372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.734598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.734615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.734772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.734783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.735034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.735046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-11-04 16:37:30.735273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.735284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.735431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.735443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.735577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.735589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.735807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.735829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.736093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.736110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-11-04 16:37:30.736345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.736362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.736584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.736607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.736846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.736863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.202 [2024-11-04 16:37:30.737108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.737126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.202 [2024-11-04 16:37:30.737355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.202 [2024-11-04 16:37:30.737373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.202 [2024-11-04 16:37:30.737549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.737567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.737728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.737746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-11-04 16:37:30.737909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-11-04 16:37:30.737926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-11-04 16:37:30.738161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.738178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.738384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.738401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64ac000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.738665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.738679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.738878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.738890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.739821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.739832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.740040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.740052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.740301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.740312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.740475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.740487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.740729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.740741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.740873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.740885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.741031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.741043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.741199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.741211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.741343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.741355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.202 [2024-11-04 16:37:30.741524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.202 [2024-11-04 16:37:30.741536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.202 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.741732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.741744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.741874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.741886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.742100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.742111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.742257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.742269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.742529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.742540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.742618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.742630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.742796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.742807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.743028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.743040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.743266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.743278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.743494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.743507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.743773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.743785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.743912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.743924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.744933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.744945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.203 [2024-11-04 16:37:30.745125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.745137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:04.203 [2024-11-04 16:37:30.745355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.745367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.203 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.203 [2024-11-04 16:37:30.745584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.745596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.745756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.745767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.745939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.745950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.746103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.746115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.746344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.746355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.746565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.746576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.746834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.746846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.747067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.203 [2024-11-04 16:37:30.747079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.203 qpair failed and we were unable to recover it.
00:26:04.203 [2024-11-04 16:37:30.747300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.747312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.747529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.747541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.747639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.747651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.747846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.747858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.747999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.748011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.748228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.748240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.748393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.748404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.748624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.748636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.748765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.204 [2024-11-04 16:37:30.748777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64a4000b90 with addr=10.0.0.2, port=4420
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.749178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:04.204 [2024-11-04 16:37:30.751395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.751471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.751489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.751497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.751504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.751524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:04.204 16:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2970641
00:26:04.204 [2024-11-04 16:37:30.761318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.761381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.761396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.761405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.761411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.761428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.771315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.771375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.771393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.771401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.771407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.771423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.781355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.781416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.781430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.781437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.781443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.781459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.791324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.791393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.791407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.791414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.791421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.791436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.801307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.801364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.801378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.801385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.801391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.801406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.811324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.811375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.811388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.811395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.811401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.811419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.821401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.821460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.821474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.204 [2024-11-04 16:37:30.821482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.204 [2024-11-04 16:37:30.821488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.204 [2024-11-04 16:37:30.821505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.204 qpair failed and we were unable to recover it.
00:26:04.204 [2024-11-04 16:37:30.831414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.204 [2024-11-04 16:37:30.831469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.204 [2024-11-04 16:37:30.831483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.831491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.831497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.831513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.841434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.841525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.841539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.841546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.841552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.841569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.851462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.851526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.851540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.851547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.851553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.851569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.861487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.861543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.861557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.861564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.861570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.861586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.871551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.871612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.871626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.871634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.871640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.871656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.881528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.881584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.881598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.881611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.881617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.881633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.891555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.891617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.891631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.891638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.891645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.891660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.901593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.205 [2024-11-04 16:37:30.901659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.205 [2024-11-04 16:37:30.901676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.205 [2024-11-04 16:37:30.901684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.205 [2024-11-04 16:37:30.901691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.205 [2024-11-04 16:37:30.901707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.205 qpair failed and we were unable to recover it.
00:26:04.205 [2024-11-04 16:37:30.911629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.911694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.205 [2024-11-04 16:37:30.911708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.205 [2024-11-04 16:37:30.911715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.205 [2024-11-04 16:37:30.911721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.205 [2024-11-04 16:37:30.911737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-11-04 16:37:30.921680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.921738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.205 [2024-11-04 16:37:30.921752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.205 [2024-11-04 16:37:30.921760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.205 [2024-11-04 16:37:30.921766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.205 [2024-11-04 16:37:30.921781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-11-04 16:37:30.931718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.931774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.205 [2024-11-04 16:37:30.931788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.205 [2024-11-04 16:37:30.931795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.205 [2024-11-04 16:37:30.931802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.205 [2024-11-04 16:37:30.931817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-11-04 16:37:30.941737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.941797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.205 [2024-11-04 16:37:30.941811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.205 [2024-11-04 16:37:30.941818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.205 [2024-11-04 16:37:30.941828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.205 [2024-11-04 16:37:30.941843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-11-04 16:37:30.951740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.951805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.205 [2024-11-04 16:37:30.951818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.205 [2024-11-04 16:37:30.951826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.205 [2024-11-04 16:37:30.951833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.205 [2024-11-04 16:37:30.951848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-11-04 16:37:30.961770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.205 [2024-11-04 16:37:30.961827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.206 [2024-11-04 16:37:30.961841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.206 [2024-11-04 16:37:30.961848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.206 [2024-11-04 16:37:30.961855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.206 [2024-11-04 16:37:30.961870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-11-04 16:37:30.971784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.206 [2024-11-04 16:37:30.971835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.206 [2024-11-04 16:37:30.971849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.206 [2024-11-04 16:37:30.971856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.206 [2024-11-04 16:37:30.971862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.206 [2024-11-04 16:37:30.971877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-11-04 16:37:30.981851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.206 [2024-11-04 16:37:30.981926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.206 [2024-11-04 16:37:30.981940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.206 [2024-11-04 16:37:30.981947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.206 [2024-11-04 16:37:30.981953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.206 [2024-11-04 16:37:30.981968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-11-04 16:37:30.991890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.206 [2024-11-04 16:37:30.991952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.206 [2024-11-04 16:37:30.991966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.206 [2024-11-04 16:37:30.991973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.206 [2024-11-04 16:37:30.991980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.206 [2024-11-04 16:37:30.991995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.001881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.001940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.001955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.001962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.001969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.001985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.011897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.011949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.011962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.011970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.011976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.011991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.021930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.021988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.022002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.022009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.022015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.022031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.031962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.032022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.032039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.032046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.032052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.032068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.041988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.042048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.042062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.042070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.042076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.042092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.052003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.052063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.052076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.052084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.052090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.052105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.062086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.062146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.062160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.062167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.062173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.062188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.072088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.072147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.072161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.072171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.072178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.072194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.082098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.082157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.082171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.082178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.082184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.082200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.092126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.465 [2024-11-04 16:37:31.092182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.465 [2024-11-04 16:37:31.092196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.465 [2024-11-04 16:37:31.092204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.465 [2024-11-04 16:37:31.092211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.465 [2024-11-04 16:37:31.092226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.465 qpair failed and we were unable to recover it. 
00:26:04.465 [2024-11-04 16:37:31.102165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.102223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.102237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.102244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.102251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.102266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.112187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.112246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.112260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.112267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.112274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.112289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.122212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.122265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.122279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.122286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.122293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.122308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.132236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.132290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.132304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.132311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.132317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.132332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.142214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.142271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.142284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.142291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.142298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.142312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.152253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.152330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.152345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.152351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.152357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.152372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.162325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.162383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.162397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.162404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.162411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.162425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.172384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.172441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.172455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.172463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.172468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.172484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.182380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.182436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.182450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.182457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.182463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.182478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.192389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.192451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.192464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.192471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.192478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.192492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.202413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.202466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.202481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.202491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.202497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.202512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.466 qpair failed and we were unable to recover it. 
00:26:04.466 [2024-11-04 16:37:31.212456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.466 [2024-11-04 16:37:31.212515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.466 [2024-11-04 16:37:31.212529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.466 [2024-11-04 16:37:31.212536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.466 [2024-11-04 16:37:31.212542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.466 [2024-11-04 16:37:31.212557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.222497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.222553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.222567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.222574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.222580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.222595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.232520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.232621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.232636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.232643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.232649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.232665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.242477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.242537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.242552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.242559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.242566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.242587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.252573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.252631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.252646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.252653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.252660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.252675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.262617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.262675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.262689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.262696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.262702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.262717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.272665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.272723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.272736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.272743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.272750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.272765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.467 [2024-11-04 16:37:31.282719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.467 [2024-11-04 16:37:31.282775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.467 [2024-11-04 16:37:31.282788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.467 [2024-11-04 16:37:31.282795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.467 [2024-11-04 16:37:31.282801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.467 [2024-11-04 16:37:31.282816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.467 qpair failed and we were unable to recover it. 
00:26:04.726 [2024-11-04 16:37:31.292670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.726 [2024-11-04 16:37:31.292728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.726 [2024-11-04 16:37:31.292742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.292749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.292756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.292770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.302736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.302797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.302832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.302840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.302847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.302880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.312734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.312795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.312809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.312817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.312823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.312838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.322717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.322803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.322817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.322824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.322830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.322845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.332742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.332797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.332814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.332822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.332828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.332843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.342838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.342911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.342925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.342932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.342938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.342953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.352837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.352890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.352905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.352912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.352918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.352933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.362893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.362968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.362982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.362989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.362995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.363010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.372929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.373003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.373018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.373025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.373036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.373052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.382879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.382937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.382950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.382958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.382966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.382981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.392899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.392954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.392969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.392975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.392981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.392997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.402976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.403030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.403043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.403050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.403057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.727 [2024-11-04 16:37:31.403072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.727 qpair failed and we were unable to recover it. 
00:26:04.727 [2024-11-04 16:37:31.413018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.727 [2024-11-04 16:37:31.413075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.727 [2024-11-04 16:37:31.413089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.727 [2024-11-04 16:37:31.413097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.727 [2024-11-04 16:37:31.413103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.413119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.423087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.423146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.423160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.423167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.423174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.423189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.433035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.433108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.433122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.433129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.433135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.433151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.443111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.443161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.443174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.443181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.443187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.443202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.453124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.453188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.453202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.453210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.453216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.453231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.463161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.463218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.463234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.463242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.463248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.463263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.473192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.473250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.473264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.473271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.473277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.473291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.483154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.483208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.483222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.483229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.483235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.483250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.493279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.493334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.493348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.493356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.493362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.493377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.503326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.728 [2024-11-04 16:37:31.503386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.728 [2024-11-04 16:37:31.503400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.728 [2024-11-04 16:37:31.503408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.728 [2024-11-04 16:37:31.503417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:04.728 [2024-11-04 16:37:31.503432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.728 qpair failed and we were unable to recover it. 
00:26:04.728 [2024-11-04 16:37:31.513328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.728 [2024-11-04 16:37:31.513384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.728 [2024-11-04 16:37:31.513398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.728 [2024-11-04 16:37:31.513406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.728 [2024-11-04 16:37:31.513412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.728 [2024-11-04 16:37:31.513428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.728 qpair failed and we were unable to recover it.
00:26:04.728 [2024-11-04 16:37:31.523343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.728 [2024-11-04 16:37:31.523398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.728 [2024-11-04 16:37:31.523413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.728 [2024-11-04 16:37:31.523421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.728 [2024-11-04 16:37:31.523429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.728 [2024-11-04 16:37:31.523445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.728 qpair failed and we were unable to recover it.
00:26:04.728 [2024-11-04 16:37:31.533369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.728 [2024-11-04 16:37:31.533419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.728 [2024-11-04 16:37:31.533434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.728 [2024-11-04 16:37:31.533441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.728 [2024-11-04 16:37:31.533448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.728 [2024-11-04 16:37:31.533464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.728 qpair failed and we were unable to recover it.
00:26:04.728 [2024-11-04 16:37:31.543354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.728 [2024-11-04 16:37:31.543428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.728 [2024-11-04 16:37:31.543443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.728 [2024-11-04 16:37:31.543450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.728 [2024-11-04 16:37:31.543456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.729 [2024-11-04 16:37:31.543471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.729 qpair failed and we were unable to recover it.
00:26:04.988 [2024-11-04 16:37:31.553540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.988 [2024-11-04 16:37:31.553610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.988 [2024-11-04 16:37:31.553625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.988 [2024-11-04 16:37:31.553633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.988 [2024-11-04 16:37:31.553639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.988 [2024-11-04 16:37:31.553654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.988 qpair failed and we were unable to recover it.
00:26:04.988 [2024-11-04 16:37:31.563488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.988 [2024-11-04 16:37:31.563544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.988 [2024-11-04 16:37:31.563558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.988 [2024-11-04 16:37:31.563565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.988 [2024-11-04 16:37:31.563571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.988 [2024-11-04 16:37:31.563586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.988 qpair failed and we were unable to recover it.
00:26:04.988 [2024-11-04 16:37:31.573475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.988 [2024-11-04 16:37:31.573529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.988 [2024-11-04 16:37:31.573542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.988 [2024-11-04 16:37:31.573550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.988 [2024-11-04 16:37:31.573557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.988 [2024-11-04 16:37:31.573571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.988 qpair failed and we were unable to recover it.
00:26:04.988 [2024-11-04 16:37:31.583561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.988 [2024-11-04 16:37:31.583640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.583654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.583661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.583667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.583682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.593538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.593594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.593616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.593623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.593629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.593645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.603558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.603618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.603632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.603639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.603646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.603661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.613614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.613673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.613687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.613695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.613701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.613716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.623611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.623671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.623685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.623692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.623698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.623713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.633695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.633800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.633813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.633824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.633831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.633846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.643705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.643811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.643825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.643833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.643839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.643855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.653686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.653742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.653757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.653764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.653771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.653786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.663728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.663784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.663797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.663804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.663810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.663825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.673757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.673816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.673830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.673838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.673845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.673860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.683736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.683791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.683806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.683814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.683820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.683835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.693806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.693860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.989 [2024-11-04 16:37:31.693874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.989 [2024-11-04 16:37:31.693882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.989 [2024-11-04 16:37:31.693888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.989 [2024-11-04 16:37:31.693905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.989 qpair failed and we were unable to recover it.
00:26:04.989 [2024-11-04 16:37:31.703785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.989 [2024-11-04 16:37:31.703839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.703852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.703860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.703867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.703882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.713875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.713934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.713948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.713955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.713961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.713976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.723910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.723984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.723999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.724006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.724012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.724028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.733923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.733976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.733989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.733996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.734003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.734018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.743989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.744047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.744061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.744068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.744075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.744090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.754045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.754106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.754120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.754128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.754134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.754149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.764006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.764064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.764077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.764088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.764094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.764109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.774031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.774090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.774104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.774111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.774117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.774132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.784068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.784125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.784139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.784146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.784152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.784167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.794096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.794150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.794164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.794171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.794178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.794193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:04.990 [2024-11-04 16:37:31.804174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.990 [2024-11-04 16:37:31.804239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.990 [2024-11-04 16:37:31.804252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.990 [2024-11-04 16:37:31.804260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.990 [2024-11-04 16:37:31.804266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:04.990 [2024-11-04 16:37:31.804284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.990 qpair failed and we were unable to recover it.
00:26:05.250 [2024-11-04 16:37:31.814151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.250 [2024-11-04 16:37:31.814206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.250 [2024-11-04 16:37:31.814219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.250 [2024-11-04 16:37:31.814226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.250 [2024-11-04 16:37:31.814233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.250 [2024-11-04 16:37:31.814247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.250 qpair failed and we were unable to recover it.
00:26:05.250 [2024-11-04 16:37:31.824192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.250 [2024-11-04 16:37:31.824248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.250 [2024-11-04 16:37:31.824262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.250 [2024-11-04 16:37:31.824269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.250 [2024-11-04 16:37:31.824276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.250 [2024-11-04 16:37:31.824291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.834226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.834283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.834296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.834303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.834310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.834325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.844245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.844304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.844317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.844325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.844331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.844346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.854290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.854344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.854359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.854366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.854373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.854389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.864309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.864391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.864405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.864413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.864419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.864433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.874383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.874435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.874449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.874456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.874462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.874478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.884387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.884438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.884452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.884459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.884466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.884481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.894388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.894440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.894457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.894463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.894470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.894485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.904417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.904472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.904485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.904492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.904499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.904514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.914498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.914553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.914567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.914574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.914580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.914595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.924472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.924523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.924536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.924543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.924549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.924565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.934524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.934596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.934616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.934623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.934632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.934648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.944539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.944597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.944617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.251 [2024-11-04 16:37:31.944624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.251 [2024-11-04 16:37:31.944630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.251 [2024-11-04 16:37:31.944646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.251 qpair failed and we were unable to recover it. 
00:26:05.251 [2024-11-04 16:37:31.954572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.251 [2024-11-04 16:37:31.954633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.251 [2024-11-04 16:37:31.954647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:31.954654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:31.954661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:31.954676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:31.964645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:31.964711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:31.964725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:31.964732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:31.964738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:31.964754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:31.974651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:31.974708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:31.974721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:31.974729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:31.974735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:31.974750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:31.984671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:31.984731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:31.984745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:31.984751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:31.984758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:31.984773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:31.994693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:31.994750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:31.994764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:31.994771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:31.994778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:31.994792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.004737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.004791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.004804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.004811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.004818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.004833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.014739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.014791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.014804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.014811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.014818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.014833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.024783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.024840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.024858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.024865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.024871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.024887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.034829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.034890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.034904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.034911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.034918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.034933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.044822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.044876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.044890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.044897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.044903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.044918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.054896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.054953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.054967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.252 [2024-11-04 16:37:32.054975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.252 [2024-11-04 16:37:32.054981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.252 [2024-11-04 16:37:32.054996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.252 qpair failed and we were unable to recover it. 
00:26:05.252 [2024-11-04 16:37:32.064874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.252 [2024-11-04 16:37:32.064953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.252 [2024-11-04 16:37:32.064966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.253 [2024-11-04 16:37:32.064974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.253 [2024-11-04 16:37:32.064983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.253 [2024-11-04 16:37:32.064998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.253 qpair failed and we were unable to recover it. 
00:26:05.515 [2024-11-04 16:37:32.074838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.515 [2024-11-04 16:37:32.074894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.515 [2024-11-04 16:37:32.074907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.515 [2024-11-04 16:37:32.074914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.515 [2024-11-04 16:37:32.074920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.515 [2024-11-04 16:37:32.074935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.515 qpair failed and we were unable to recover it. 
00:26:05.515 [2024-11-04 16:37:32.084965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.515 [2024-11-04 16:37:32.085021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.515 [2024-11-04 16:37:32.085035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.516 [2024-11-04 16:37:32.085042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.516 [2024-11-04 16:37:32.085049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.516 [2024-11-04 16:37:32.085065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.516 qpair failed and we were unable to recover it. 
00:26:05.516 [2024-11-04 16:37:32.094955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.516 [2024-11-04 16:37:32.095010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.516 [2024-11-04 16:37:32.095023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.516 [2024-11-04 16:37:32.095031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.516 [2024-11-04 16:37:32.095037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.516 [2024-11-04 16:37:32.095052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.516 qpair failed and we were unable to recover it. 
00:26:05.516 [2024-11-04 16:37:32.104991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.516 [2024-11-04 16:37:32.105054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.516 [2024-11-04 16:37:32.105068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.516 [2024-11-04 16:37:32.105075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.516 [2024-11-04 16:37:32.105082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.516 [2024-11-04 16:37:32.105097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.516 qpair failed and we were unable to recover it. 
00:26:05.516 [2024-11-04 16:37:32.115019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.115074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.115087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.115094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.115101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.115115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.125039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.125124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.125138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.125145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.125151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.125165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.135073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.135128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.135142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.135148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.135155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.135170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.145120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.145207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.145222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.145230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.145236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.145252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.155070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.155128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.155146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.155153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.155159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.155174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.165172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.165226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.165242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.165250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.165257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.165273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.175204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.175283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.175296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.175304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.175311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.175327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.185221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.185318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.185332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.185339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.185346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.185361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.195249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.195307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.195321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.195331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.195338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.195353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.205275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.516 [2024-11-04 16:37:32.205332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.516 [2024-11-04 16:37:32.205345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.516 [2024-11-04 16:37:32.205353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.516 [2024-11-04 16:37:32.205359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.516 [2024-11-04 16:37:32.205375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.516 qpair failed and we were unable to recover it.
00:26:05.516 [2024-11-04 16:37:32.215301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.215374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.215388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.215395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.215401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.215416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.225334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.225393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.225408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.225416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.225422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.225438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.235343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.235402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.235415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.235423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.235430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.235446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.245389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.245447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.245461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.245469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.245475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.245490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.255421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.255478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.255492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.255500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.255506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.255521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.265461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.265517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.265531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.265538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.265544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.265559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.275484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.275538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.275552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.275559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.275566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.275581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.285527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.285586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.285604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.285612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.285618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.285633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.295530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.295587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.295606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.295613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.295620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.295635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.305571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.305637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.305650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.305658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.305664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.305679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.315603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.315709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.315722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.315729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.517 [2024-11-04 16:37:32.315736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.517 [2024-11-04 16:37:32.315751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.517 qpair failed and we were unable to recover it.
00:26:05.517 [2024-11-04 16:37:32.325653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.517 [2024-11-04 16:37:32.325707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.517 [2024-11-04 16:37:32.325720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.517 [2024-11-04 16:37:32.325732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.518 [2024-11-04 16:37:32.325738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.518 [2024-11-04 16:37:32.325754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.518 qpair failed and we were unable to recover it.
00:26:05.518 [2024-11-04 16:37:32.335637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.518 [2024-11-04 16:37:32.335695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.518 [2024-11-04 16:37:32.335709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.518 [2024-11-04 16:37:32.335717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.518 [2024-11-04 16:37:32.335723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.518 [2024-11-04 16:37:32.335738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.518 qpair failed and we were unable to recover it.
00:26:05.840 [2024-11-04 16:37:32.345687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.840 [2024-11-04 16:37:32.345748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.840 [2024-11-04 16:37:32.345766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.840 [2024-11-04 16:37:32.345775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.840 [2024-11-04 16:37:32.345781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.840 [2024-11-04 16:37:32.345799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.840 qpair failed and we were unable to recover it.
00:26:05.840 [2024-11-04 16:37:32.355725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.840 [2024-11-04 16:37:32.355785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.840 [2024-11-04 16:37:32.355800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.840 [2024-11-04 16:37:32.355808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.840 [2024-11-04 16:37:32.355815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.355831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.365738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.365794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.365809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.365816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.365823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.365841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.375772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.375852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.375868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.375876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.375883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.375899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.385794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.385851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.385865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.385872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.385879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.385895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.395830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.395890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.395904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.395911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.395918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.395933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.405856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.405908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.405922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.405929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.405936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.405951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.415902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.415987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.416000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.416007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.416014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.416029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.425937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.426183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.426199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.426206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.426213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.426229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.435936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.435995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.436009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.436016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.436023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.436037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.446023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.446078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.446093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.446100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.446106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.446121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.456024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:05.841 [2024-11-04 16:37:32.456083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:05.841 [2024-11-04 16:37:32.456100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:05.841 [2024-11-04 16:37:32.456108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:05.841 [2024-11-04 16:37:32.456114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:05.841 [2024-11-04 16:37:32.456131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:05.841 qpair failed and we were unable to recover it.
00:26:05.841 [2024-11-04 16:37:32.466025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.466081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.466095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.466102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.466109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.466124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.476096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.476154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.476168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.476175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.476182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.476197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.486078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.486132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.486146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.486153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.486159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.486175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.496108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.496164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.496178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.496185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.496195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.496210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.506075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.506132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.506146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.506153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.506160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.506175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.516120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.516183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.516197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.516204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.516211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.516227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.526196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.526265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.526278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.526285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.526292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.526308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.536262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.536321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.536334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.536341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.536348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.536362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.546260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.546320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.546334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.546341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.546347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.546362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.556279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.556332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.556346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.556353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.556359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.556374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.566347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.566402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.566416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.566424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.566431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.566446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.842 [2024-11-04 16:37:32.576355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.842 [2024-11-04 16:37:32.576409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.842 [2024-11-04 16:37:32.576422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.842 [2024-11-04 16:37:32.576429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.842 [2024-11-04 16:37:32.576436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.842 [2024-11-04 16:37:32.576451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.842 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.586390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.586465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.586483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.586490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.586496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.586511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.596454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.596510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.596524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.596531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.596538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.596553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.606411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.606507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.606522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.606529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.606536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.606551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.616454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.616534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.616549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.616557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.616563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.616578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.626478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.626536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.626550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.626558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.626568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.626584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.636520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.636578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.636592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.636605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.636612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.636628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:05.843 [2024-11-04 16:37:32.646584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.843 [2024-11-04 16:37:32.646646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.843 [2024-11-04 16:37:32.646661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.843 [2024-11-04 16:37:32.646669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.843 [2024-11-04 16:37:32.646676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:05.843 [2024-11-04 16:37:32.646693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:05.843 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.656587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.656650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.656665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.656672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.656678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.656696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.666599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.666664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.666678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.666685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.666692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.666707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.676631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.676692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.676706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.676714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.676720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.676735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.686668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.686728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.686741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.686749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.686755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.686771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.696671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.696727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.696741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.696748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.696755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.696769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.706692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.706748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.706762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.706769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.706776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.706791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.716691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.716762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.716779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.716787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.716793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.716808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.726789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.726844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.726858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.726864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.726871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.726887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.736735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.736787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.736801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.736808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.736815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.736830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.746797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.104 [2024-11-04 16:37:32.746852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.104 [2024-11-04 16:37:32.746866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.104 [2024-11-04 16:37:32.746873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.104 [2024-11-04 16:37:32.746880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.104 [2024-11-04 16:37:32.746895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.104 qpair failed and we were unable to recover it. 
00:26:06.104 [2024-11-04 16:37:32.756843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.104 [2024-11-04 16:37:32.756902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.104 [2024-11-04 16:37:32.756916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.104 [2024-11-04 16:37:32.756928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.756934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.756950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.766799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.766853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.766866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.766874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.766880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.766895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.776804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.776854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.776869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.776876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.776882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.776896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.786884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.786950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.786964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.786972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.786979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.786993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.796866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.796923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.796937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.796944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.796950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.796970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.806940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.806994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.807008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.807015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.807021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.807036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.816967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.817020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.817034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.817041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.817047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.817061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.826942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.826999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.827012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.827020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.827026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.827041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.837069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.837153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.837169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.837176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.837182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.837198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.847114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.847175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.847189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.847197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.847204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.847219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.857113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.857169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.857183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.857190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.857197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.857211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.867166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.867221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.867234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.867242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.867248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.867264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.877100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.877162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.877175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.105 [2024-11-04 16:37:32.877183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.105 [2024-11-04 16:37:32.877189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.105 [2024-11-04 16:37:32.877204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.105 qpair failed and we were unable to recover it.
00:26:06.105 [2024-11-04 16:37:32.887198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.105 [2024-11-04 16:37:32.887248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.105 [2024-11-04 16:37:32.887262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.106 [2024-11-04 16:37:32.887272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.106 [2024-11-04 16:37:32.887278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.106 [2024-11-04 16:37:32.887294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.106 qpair failed and we were unable to recover it.
00:26:06.106 [2024-11-04 16:37:32.897151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.106 [2024-11-04 16:37:32.897215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.106 [2024-11-04 16:37:32.897228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.106 [2024-11-04 16:37:32.897236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.106 [2024-11-04 16:37:32.897242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.106 [2024-11-04 16:37:32.897256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.106 qpair failed and we were unable to recover it.
00:26:06.106 [2024-11-04 16:37:32.907199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.106 [2024-11-04 16:37:32.907265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.106 [2024-11-04 16:37:32.907278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.106 [2024-11-04 16:37:32.907285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.106 [2024-11-04 16:37:32.907292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.106 [2024-11-04 16:37:32.907306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.106 qpair failed and we were unable to recover it.
00:26:06.106 [2024-11-04 16:37:32.917288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.106 [2024-11-04 16:37:32.917346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.106 [2024-11-04 16:37:32.917360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.106 [2024-11-04 16:37:32.917367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.106 [2024-11-04 16:37:32.917375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.106 [2024-11-04 16:37:32.917390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.106 qpair failed and we were unable to recover it.
00:26:06.106 [2024-11-04 16:37:32.927329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.365 [2024-11-04 16:37:32.927413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.365 [2024-11-04 16:37:32.927427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.365 [2024-11-04 16:37:32.927434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.365 [2024-11-04 16:37:32.927440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.365 [2024-11-04 16:37:32.927458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.365 qpair failed and we were unable to recover it.
00:26:06.365 [2024-11-04 16:37:32.937358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.937442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.937456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.937463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.937469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.937484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.947390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.947446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.947461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.947468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.947474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.947489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.957418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.957476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.957490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.957498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.957505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.957521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.967441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.967498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.967513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.967520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.967527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.967542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.977442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.977498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.977512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.977521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.977527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.977542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.987457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.987516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.987530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.987537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.987543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.987558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:32.997464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:32.997534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:32.997547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:32.997555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:32.997561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:32.997575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.007533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.007606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.007622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.007630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.007637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.007652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.017586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.017641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.017658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.017665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.017671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.017686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.027621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.027699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.027721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.027729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.027736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.027753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.037657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.037707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.037720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.037727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.037734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.037749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.047687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.047784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.047797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.047804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.047810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.047826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.057767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:06.366 [2024-11-04 16:37:33.057825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:06.366 [2024-11-04 16:37:33.057838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:06.366 [2024-11-04 16:37:33.057845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:06.366 [2024-11-04 16:37:33.057855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:06.366 [2024-11-04 16:37:33.057869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:06.366 qpair failed and we were unable to recover it.
00:26:06.366 [2024-11-04 16:37:33.067754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.366 [2024-11-04 16:37:33.067814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.067828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.067835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.067841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.067857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.077770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.077829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.077842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.077849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.077856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.077870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.087786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.087842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.087855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.087863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.087869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.087884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.097818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.097872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.097885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.097892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.097899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.097914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.107897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.107977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.107990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.107997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.108003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.108018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.117886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.117942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.117956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.117963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.117968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.117983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.127953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.128013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.128027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.128035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.128041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.128056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.137936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.137990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.138004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.138010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.138017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.138032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.147970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.148028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.148044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.148051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.148058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.148073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.158008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.158062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.158075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.158083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.158089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.158104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.168048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.168112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.168126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.168133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.168139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.168154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.178058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.178115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.178129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.178136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.178143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.178158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.367 [2024-11-04 16:37:33.188079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.367 [2024-11-04 16:37:33.188171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.367 [2024-11-04 16:37:33.188185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.367 [2024-11-04 16:37:33.188192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.367 [2024-11-04 16:37:33.188201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.367 [2024-11-04 16:37:33.188215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.367 qpair failed and we were unable to recover it. 
00:26:06.627 [2024-11-04 16:37:33.198097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.627 [2024-11-04 16:37:33.198166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.627 [2024-11-04 16:37:33.198180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.627 [2024-11-04 16:37:33.198187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.198193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.198208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.208194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.208264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.208277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.208284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.208290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.208305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.218129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.218181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.218194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.218201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.218208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.218222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.228243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.228301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.228314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.228321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.228328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.228343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.238236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.238290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.238303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.238310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.238317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.238332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.248259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.248317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.248331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.248339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.248345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.248360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.258283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.258340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.258354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.258361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.258368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.258383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.268319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.268375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.268389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.268396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.268402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.268418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.278354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.278412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.278429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.278436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.278442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.278457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.288368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.628 [2024-11-04 16:37:33.288446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.628 [2024-11-04 16:37:33.288460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.628 [2024-11-04 16:37:33.288468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.628 [2024-11-04 16:37:33.288474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.628 [2024-11-04 16:37:33.288489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.628 qpair failed and we were unable to recover it. 
00:26:06.628 [2024-11-04 16:37:33.298391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.298442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.298456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.298463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.298469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.298484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.308431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.308508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.308523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.308529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.308536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.308550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.318402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.318471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.318485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.318495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.318501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.318516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.328501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.328566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.328580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.328587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.328594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.328614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.338507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.338575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.338598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.338611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.338617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.338638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.348548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.348627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.348641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.348648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.348655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.348671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.358576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.358637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.358651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.358659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.358665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.358683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.368597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.368656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.368670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.368677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.368684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.368699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.378635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.378685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.378699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.378706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.378713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.629 [2024-11-04 16:37:33.378728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.629 qpair failed and we were unable to recover it. 
00:26:06.629 [2024-11-04 16:37:33.388650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.629 [2024-11-04 16:37:33.388717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.629 [2024-11-04 16:37:33.388731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.629 [2024-11-04 16:37:33.388739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.629 [2024-11-04 16:37:33.388745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.388761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.398693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.398750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.398763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.398770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.398777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.398792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.408761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.408826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.408839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.408847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.408852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.408867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.418750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.418809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.418823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.418830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.418836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.418851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.428782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.428840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.428855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.428863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.428869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.428884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.438799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.438860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.438874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.438881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.438887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.438903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.630 [2024-11-04 16:37:33.448826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.630 [2024-11-04 16:37:33.448882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.630 [2024-11-04 16:37:33.448895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.630 [2024-11-04 16:37:33.448906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.630 [2024-11-04 16:37:33.448912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.630 [2024-11-04 16:37:33.448927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.630 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.458908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.458974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.458988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.458995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.459001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.459016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.468916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.468996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.469011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.469018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.469025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.469041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.478890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.478949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.478962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.478970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.478977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.478992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.488965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.489034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.489048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.489055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.489061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.489078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.498899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.498950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.498963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.498970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.498976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.498991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.508994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.509048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.509062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.509069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.509075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.509090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.519042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.519097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.519110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.519117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.519123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.519138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.529079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.529133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.529146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.529153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.529159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.529175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.539095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.539151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.539165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.539173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.539179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.539195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.549130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.549211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.549224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.549231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.549237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.549252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.559180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.559260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.559275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.559282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.559288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.890 [2024-11-04 16:37:33.559303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.890 qpair failed and we were unable to recover it. 
00:26:06.890 [2024-11-04 16:37:33.569170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.890 [2024-11-04 16:37:33.569235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.890 [2024-11-04 16:37:33.569249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.890 [2024-11-04 16:37:33.569256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.890 [2024-11-04 16:37:33.569262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.569278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.579242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.579308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.579325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.579333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.579338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.579354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.589257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.589325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.589339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.589346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.589353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.589367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.599258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.599317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.599331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.599339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.599345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.599360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.609257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.609311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.609324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.609331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.609337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.609353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.619365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.619428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.619442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.619450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.619459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.619474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.629337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.629409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.629423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.629430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.629436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.629451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.639364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.639455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.639469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.639476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.639482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.639497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.649451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.649505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.649519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.649526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.649533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.649548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.659425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.659478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.659492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.659500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.659506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.659521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
00:26:06.891 [2024-11-04 16:37:33.669469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.891 [2024-11-04 16:37:33.669526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.891 [2024-11-04 16:37:33.669540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.891 [2024-11-04 16:37:33.669547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.891 [2024-11-04 16:37:33.669554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:06.891 [2024-11-04 16:37:33.669569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.891 qpair failed and we were unable to recover it. 
[Log condensed: the identical CONNECT failure sequence above (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, CQ transport error -6 on qpair id 2) repeats for 34 further attempts at roughly 10 ms intervals, from 16:37:33.679531 through 16:37:34.010530, each attempt again ending in "qpair failed and we were unable to recover it."]
00:26:07.414 [2024-11-04 16:37:34.020441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.020497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.020512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.020519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.020525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.020540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.030531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.030592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.030611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.030619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.030625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.030640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.040561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.040622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.040635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.040643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.040650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.040665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.050537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.050590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.050610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.050618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.050624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.050642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.060510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.060564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.060581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.060591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.060607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.060623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.070620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.070677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.070694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.070702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.070710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.070728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.080639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.080698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.080713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.080720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.080728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.080743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.090610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.090666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.090680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.090688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.090694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.090709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.100691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.100745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.100758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.100766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.100772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.100787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.110666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.110723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.110738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.110745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.110752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.110767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.120744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.120800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.120814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.120821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.120828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.120843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.130742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.130794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.130808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.130815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.130821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.130837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.140807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.414 [2024-11-04 16:37:34.140863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.414 [2024-11-04 16:37:34.140880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.414 [2024-11-04 16:37:34.140888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.414 [2024-11-04 16:37:34.140895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.414 [2024-11-04 16:37:34.140910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.414 qpair failed and we were unable to recover it. 
00:26:07.414 [2024-11-04 16:37:34.150811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.150918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.150932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.150940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.150946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.150962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.160853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.160909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.160923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.160931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.160937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.160952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.170894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.170948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.170962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.170969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.170975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.170990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.180957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.181056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.181070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.181077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.181086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.181101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.190940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.190996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.191010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.191017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.191024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.191041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.200911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.200966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.200980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.200987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.200994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.201008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.211015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.211071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.211085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.211092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.211099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.211114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.220971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.221028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.221041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.221048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.221055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.221070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.415 [2024-11-04 16:37:34.231009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.415 [2024-11-04 16:37:34.231070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.415 [2024-11-04 16:37:34.231083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.415 [2024-11-04 16:37:34.231091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.415 [2024-11-04 16:37:34.231098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.415 [2024-11-04 16:37:34.231112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.415 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.241039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.675 [2024-11-04 16:37:34.241095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.675 [2024-11-04 16:37:34.241108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.675 [2024-11-04 16:37:34.241116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.675 [2024-11-04 16:37:34.241122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.675 [2024-11-04 16:37:34.241137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.675 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.251074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.675 [2024-11-04 16:37:34.251127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.675 [2024-11-04 16:37:34.251141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.675 [2024-11-04 16:37:34.251148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.675 [2024-11-04 16:37:34.251154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.675 [2024-11-04 16:37:34.251169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.675 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.261141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.675 [2024-11-04 16:37:34.261196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.675 [2024-11-04 16:37:34.261210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.675 [2024-11-04 16:37:34.261217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.675 [2024-11-04 16:37:34.261223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.675 [2024-11-04 16:37:34.261238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.675 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.271188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.675 [2024-11-04 16:37:34.271244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.675 [2024-11-04 16:37:34.271260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.675 [2024-11-04 16:37:34.271268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.675 [2024-11-04 16:37:34.271274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.675 [2024-11-04 16:37:34.271288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.675 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.281218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.675 [2024-11-04 16:37:34.281275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.675 [2024-11-04 16:37:34.281288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.675 [2024-11-04 16:37:34.281295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.675 [2024-11-04 16:37:34.281302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.675 [2024-11-04 16:37:34.281317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.675 qpair failed and we were unable to recover it. 
00:26:07.675 [2024-11-04 16:37:34.291158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.291233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.291247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.291254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.291260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.675 [2024-11-04 16:37:34.291275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.675 qpair failed and we were unable to recover it.
00:26:07.675 [2024-11-04 16:37:34.301255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.301314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.301327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.301334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.301342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.675 [2024-11-04 16:37:34.301357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.675 qpair failed and we were unable to recover it.
00:26:07.675 [2024-11-04 16:37:34.311241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.311298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.311312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.311319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.311329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.675 [2024-11-04 16:37:34.311344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.675 qpair failed and we were unable to recover it.
00:26:07.675 [2024-11-04 16:37:34.321349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.321422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.321435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.321442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.321449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.675 [2024-11-04 16:37:34.321465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.675 qpair failed and we were unable to recover it.
00:26:07.675 [2024-11-04 16:37:34.331372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.331429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.331443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.331451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.331457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.675 [2024-11-04 16:37:34.331473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.675 qpair failed and we were unable to recover it.
00:26:07.675 [2024-11-04 16:37:34.341417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.675 [2024-11-04 16:37:34.341484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.675 [2024-11-04 16:37:34.341498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.675 [2024-11-04 16:37:34.341506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.675 [2024-11-04 16:37:34.341513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.341529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.351348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.351408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.351422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.351430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.351437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.351451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.361454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.361513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.361527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.361534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.361541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.361555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.371507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.371572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.371586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.371593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.371606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.371622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.381487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.381552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.381566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.381573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.381579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.381594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.391525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.391629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.391644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.391651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.391657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.391671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.401491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.401554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.401568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.401576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.401582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.401599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.411580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.411639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.411653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.411660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.411667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.411682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.421598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.421659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.421673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.421679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.421686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.421701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.431647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.431702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.431715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.431722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.431729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.431745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.441675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.441733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.441746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.441757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.441763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.441779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.451693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.451786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.451800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.451807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.451813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.451829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.461744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.461800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.461816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.461824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.461830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.676 [2024-11-04 16:37:34.461845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.676 qpair failed and we were unable to recover it.
00:26:07.676 [2024-11-04 16:37:34.471765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.676 [2024-11-04 16:37:34.471820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.676 [2024-11-04 16:37:34.471834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.676 [2024-11-04 16:37:34.471841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.676 [2024-11-04 16:37:34.471847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.677 [2024-11-04 16:37:34.471862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.677 qpair failed and we were unable to recover it.
00:26:07.677 [2024-11-04 16:37:34.481788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.677 [2024-11-04 16:37:34.481846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.677 [2024-11-04 16:37:34.481859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.677 [2024-11-04 16:37:34.481866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.677 [2024-11-04 16:37:34.481873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.677 [2024-11-04 16:37:34.481891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.677 qpair failed and we were unable to recover it.
00:26:07.677 [2024-11-04 16:37:34.491863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.677 [2024-11-04 16:37:34.491923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.677 [2024-11-04 16:37:34.491938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.677 [2024-11-04 16:37:34.491946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.677 [2024-11-04 16:37:34.491952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.677 [2024-11-04 16:37:34.491968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.677 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.501838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.501891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.501905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.501912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.501918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.501933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.511874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.511930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.511943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.511950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.511956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.511971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.521888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.521963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.521978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.521985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.521991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.522006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.531906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.531961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.531977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.531984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.531991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.532006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.541992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.542055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.542069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.542076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.542082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.542099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.552016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.552096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.552111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.552118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.552124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.552139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.562005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.562082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.562096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.562104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.562110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.562125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.572016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.572076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.572090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.572100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.572107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.572121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.582054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.582105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.582119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.582126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.582132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.582147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.592090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.592151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.592164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.936 [2024-11-04 16:37:34.592172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.936 [2024-11-04 16:37:34.592179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.936 [2024-11-04 16:37:34.592194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.936 qpair failed and we were unable to recover it.
00:26:07.936 [2024-11-04 16:37:34.602111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.936 [2024-11-04 16:37:34.602170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.936 [2024-11-04 16:37:34.602183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.937 [2024-11-04 16:37:34.602190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.937 [2024-11-04 16:37:34.602197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.937 [2024-11-04 16:37:34.602211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.937 qpair failed and we were unable to recover it.
00:26:07.937 [2024-11-04 16:37:34.612131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.937 [2024-11-04 16:37:34.612185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.937 [2024-11-04 16:37:34.612199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.937 [2024-11-04 16:37:34.612206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.937 [2024-11-04 16:37:34.612213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.937 [2024-11-04 16:37:34.612231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.937 qpair failed and we were unable to recover it.
00:26:07.937 [2024-11-04 16:37:34.622207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.937 [2024-11-04 16:37:34.622313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.937 [2024-11-04 16:37:34.622327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.937 [2024-11-04 16:37:34.622335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.937 [2024-11-04 16:37:34.622341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.937 [2024-11-04 16:37:34.622356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.937 qpair failed and we were unable to recover it.
00:26:07.937 [2024-11-04 16:37:34.632202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:07.937 [2024-11-04 16:37:34.632260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:07.937 [2024-11-04 16:37:34.632273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:07.937 [2024-11-04 16:37:34.632280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:07.937 [2024-11-04 16:37:34.632288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:07.937 [2024-11-04 16:37:34.632303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:07.937 qpair failed and we were unable to recover it.
00:26:07.937 [2024-11-04 16:37:34.642219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.642316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.642330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.642337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.642343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.642359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.652248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.652325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.652339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.652346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.652352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.652367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.662277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.662332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.662346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.662354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.662360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.662375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.672310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.672368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.672382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.672389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.672396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.672411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.682340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.682393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.682406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.682414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.682420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.682435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.692397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.692453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.692467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.692474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.692481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.692496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.702383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.702437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.702454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.702461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.702468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.702483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.712430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.712486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.712500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.712507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.712514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.712529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.722468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.937 [2024-11-04 16:37:34.722525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.937 [2024-11-04 16:37:34.722539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.937 [2024-11-04 16:37:34.722547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.937 [2024-11-04 16:37:34.722553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.937 [2024-11-04 16:37:34.722568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.937 qpair failed and we were unable to recover it. 
00:26:07.937 [2024-11-04 16:37:34.732472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.938 [2024-11-04 16:37:34.732526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.938 [2024-11-04 16:37:34.732540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.938 [2024-11-04 16:37:34.732547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.938 [2024-11-04 16:37:34.732554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.938 [2024-11-04 16:37:34.732568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.938 qpair failed and we were unable to recover it. 
00:26:07.938 [2024-11-04 16:37:34.742495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.938 [2024-11-04 16:37:34.742550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.938 [2024-11-04 16:37:34.742564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.938 [2024-11-04 16:37:34.742571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.938 [2024-11-04 16:37:34.742581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.938 [2024-11-04 16:37:34.742596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.938 qpair failed and we were unable to recover it. 
00:26:07.938 [2024-11-04 16:37:34.752546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.938 [2024-11-04 16:37:34.752611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.938 [2024-11-04 16:37:34.752625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.938 [2024-11-04 16:37:34.752632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.938 [2024-11-04 16:37:34.752638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:07.938 [2024-11-04 16:37:34.752653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.938 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.762566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.762654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.762668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.762676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.762682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.762697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.772598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.772658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.772672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.772679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.772686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.772701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.782624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.782681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.782694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.782701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.782708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.782723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.792664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.792741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.792755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.792762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.792768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.792783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.802760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.802861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.802874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.802881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.802887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.802902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.812639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.812699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.197 [2024-11-04 16:37:34.812713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.197 [2024-11-04 16:37:34.812721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.197 [2024-11-04 16:37:34.812727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.197 [2024-11-04 16:37:34.812742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.197 qpair failed and we were unable to recover it. 
00:26:08.197 [2024-11-04 16:37:34.822726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.197 [2024-11-04 16:37:34.822793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.822808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.822815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.822821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.822836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.832784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.832844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.832862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.832869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.832876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.832891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.842781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.842837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.842851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.842858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.842865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.842879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.852834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.852891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.852904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.852912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.852918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.852933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.862862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.862917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.862931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.862939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.862945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.862960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.872939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.873019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.873034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.873042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.873051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.873066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.882878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.882937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.882950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.882957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.882964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.882979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.893005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.893062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.893076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.893083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.893089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.893104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.902962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.198 [2024-11-04 16:37:34.903017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.198 [2024-11-04 16:37:34.903030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.198 [2024-11-04 16:37:34.903037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.198 [2024-11-04 16:37:34.903045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.198 [2024-11-04 16:37:34.903062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.198 qpair failed and we were unable to recover it. 
00:26:08.198 [2024-11-04 16:37:34.912938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.198 [2024-11-04 16:37:34.912995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.198 [2024-11-04 16:37:34.913009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.198 [2024-11-04 16:37:34.913016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.198 [2024-11-04 16:37:34.913022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.198 [2024-11-04 16:37:34.913038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.198 qpair failed and we were unable to recover it.
00:26:08.198 [2024-11-04 16:37:34.923040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.198 [2024-11-04 16:37:34.923101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.198 [2024-11-04 16:37:34.923115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.198 [2024-11-04 16:37:34.923122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.198 [2024-11-04 16:37:34.923129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.198 [2024-11-04 16:37:34.923145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.198 qpair failed and we were unable to recover it.
00:26:08.198 [2024-11-04 16:37:34.933092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.198 [2024-11-04 16:37:34.933149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.198 [2024-11-04 16:37:34.933163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.198 [2024-11-04 16:37:34.933171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.198 [2024-11-04 16:37:34.933178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.198 [2024-11-04 16:37:34.933193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.198 qpair failed and we were unable to recover it.
00:26:08.198 [2024-11-04 16:37:34.943068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.198 [2024-11-04 16:37:34.943120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.198 [2024-11-04 16:37:34.943134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.198 [2024-11-04 16:37:34.943141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.198 [2024-11-04 16:37:34.943148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.198 [2024-11-04 16:37:34.943164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.198 qpair failed and we were unable to recover it.
00:26:08.198 [2024-11-04 16:37:34.953119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.198 [2024-11-04 16:37:34.953188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:34.953202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:34.953209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:34.953215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:34.953230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:34.963206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:34.963313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:34.963326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:34.963333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:34.963339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:34.963355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:34.973167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:34.973253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:34.973267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:34.973274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:34.973280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:34.973295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:34.983113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:34.983176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:34.983190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:34.983198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:34.983204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:34.983219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:34.993147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:34.993210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:34.993224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:34.993232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:34.993238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:34.993254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:35.003253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:35.003308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:35.003321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:35.003334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:35.003340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:35.003355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.199 [2024-11-04 16:37:35.013282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.199 [2024-11-04 16:37:35.013336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.199 [2024-11-04 16:37:35.013350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.199 [2024-11-04 16:37:35.013357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.199 [2024-11-04 16:37:35.013364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.199 [2024-11-04 16:37:35.013378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.199 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.023332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.023407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.023422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.023429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.023435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.023450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.033329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.033385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.033399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.033406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.033413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.033428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.043356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.043409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.043424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.043430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.043437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.043456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.053384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.053438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.053453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.053460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.053466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.053482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.063384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.063434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.063448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.063455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.063461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.063476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.073445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.073519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.073533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.073541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.073547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.073562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.083468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.083523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.083537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.083544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.083551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.083566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.093555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.093621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.093635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.093643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.093649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.093664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.103521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.103575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.103590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.103598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.103609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.103624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.113553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.113612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.113626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.113633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.113640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.113654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.123515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.458 [2024-11-04 16:37:35.123581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.458 [2024-11-04 16:37:35.123595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.458 [2024-11-04 16:37:35.123608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.458 [2024-11-04 16:37:35.123616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.458 [2024-11-04 16:37:35.123631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.458 qpair failed and we were unable to recover it.
00:26:08.458 [2024-11-04 16:37:35.133609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.133664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.133681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.133688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.133695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.133710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.143635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.143689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.143703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.143710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.143716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.143732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.153678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.153735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.153748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.153755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.153762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.153777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.163689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.163747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.163761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.163769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.163776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.163791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.173702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.173764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.173777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.173785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.173791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.173809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.183758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.183822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.183836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.183843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.183849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.183864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.193775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.193834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.193848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.193855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.193861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.193877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.203858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.203914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.203928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.203935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.203941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.203957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.213861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.213922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.213936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.213943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.213949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.213964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.223899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.223960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.223974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.223981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.223987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.224002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.233896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.233954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.233969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.233976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.233982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.233997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.243892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.243951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.243967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.243975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.243984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.244002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.253944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:08.459 [2024-11-04 16:37:35.254000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:08.459 [2024-11-04 16:37:35.254014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:08.459 [2024-11-04 16:37:35.254021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:08.459 [2024-11-04 16:37:35.254029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:08.459 [2024-11-04 16:37:35.254044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:08.459 qpair failed and we were unable to recover it.
00:26:08.459 [2024-11-04 16:37:35.263959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.459 [2024-11-04 16:37:35.264013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.460 [2024-11-04 16:37:35.264031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.460 [2024-11-04 16:37:35.264038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.460 [2024-11-04 16:37:35.264045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.460 [2024-11-04 16:37:35.264060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.460 qpair failed and we were unable to recover it. 
00:26:08.460 [2024-11-04 16:37:35.273934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.460 [2024-11-04 16:37:35.273992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.460 [2024-11-04 16:37:35.274007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.460 [2024-11-04 16:37:35.274015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.460 [2024-11-04 16:37:35.274021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.460 [2024-11-04 16:37:35.274037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.460 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.284029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.284082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.284096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.284103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.719 [2024-11-04 16:37:35.284109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.719 [2024-11-04 16:37:35.284124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.719 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.294053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.294106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.294119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.294126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.719 [2024-11-04 16:37:35.294133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.719 [2024-11-04 16:37:35.294148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.719 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.304078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.304133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.304146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.304153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.719 [2024-11-04 16:37:35.304163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.719 [2024-11-04 16:37:35.304178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.719 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.314121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.314177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.314191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.314198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.719 [2024-11-04 16:37:35.314204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.719 [2024-11-04 16:37:35.314219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.719 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.324186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.324247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.324261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.324268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.719 [2024-11-04 16:37:35.324275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.719 [2024-11-04 16:37:35.324290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.719 qpair failed and we were unable to recover it. 
00:26:08.719 [2024-11-04 16:37:35.334169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.719 [2024-11-04 16:37:35.334221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.719 [2024-11-04 16:37:35.334235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.719 [2024-11-04 16:37:35.334242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.334249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.334264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.344198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.344256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.344270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.344278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.344284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.344299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.354224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.354282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.354296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.354303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.354310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.354324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.364262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.364316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.364330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.364337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.364343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.364358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.374280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.374337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.374351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.374359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.374365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.374381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.384298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.384363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.384379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.384386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.384392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.384408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.394274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.394361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.394379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.394386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.394392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.394407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.404343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.404397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.404411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.404418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.404425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.404440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.414390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.414446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.414460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.414467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.414474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.414489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.424416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.424469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.424483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.424490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.424496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.424511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.434451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.434525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.434541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.434548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.434559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.434573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.444473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.444532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.444546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.444553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.444560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.444574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.454501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.454558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.454572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.454579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.454585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.720 [2024-11-04 16:37:35.454611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.720 qpair failed and we were unable to recover it. 
00:26:08.720 [2024-11-04 16:37:35.464527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.720 [2024-11-04 16:37:35.464581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.720 [2024-11-04 16:37:35.464594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.720 [2024-11-04 16:37:35.464607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.720 [2024-11-04 16:37:35.464614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.464630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.474562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.474642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.474656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.474663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.474669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.474685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.484596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.484657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.484672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.484679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.484686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.484701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.494592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.494653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.494667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.494674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.494681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.494695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.504639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.504725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.504739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.504747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.504753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.504768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.514691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.514754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.514768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.514776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.514782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.514799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.524706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.524772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.524786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.524793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.524800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.524814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.721 [2024-11-04 16:37:35.534682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.721 [2024-11-04 16:37:35.534736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.721 [2024-11-04 16:37:35.534749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.721 [2024-11-04 16:37:35.534756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.721 [2024-11-04 16:37:35.534762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.721 [2024-11-04 16:37:35.534777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.721 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.544807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.544872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.544886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.544893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.544899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.544915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.554794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.554852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.554866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.554873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.554880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.554895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.564777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.564831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.564845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.564855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.564862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.564877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.574853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.574934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.574950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.574958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.574965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.574980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.584881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.584953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.584967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.584974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.584981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.584996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.594922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.594980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.594995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.595002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.595008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.595022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.604982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.605042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.605056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.605063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.605070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.605088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.614960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.981 [2024-11-04 16:37:35.615016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.981 [2024-11-04 16:37:35.615030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.981 [2024-11-04 16:37:35.615037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.981 [2024-11-04 16:37:35.615043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.981 [2024-11-04 16:37:35.615059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.981 qpair failed and we were unable to recover it. 
00:26:08.981 [2024-11-04 16:37:35.624978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.625031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.625044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.625051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.625057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.625071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.634977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.635033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.635047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.635054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.635060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.635074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.645064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.645131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.645145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.645153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.645159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.645174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.655013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.655074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.655088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.655095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.655102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.655117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.665067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.665123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.665138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.665145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.665151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.665167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.675073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.675128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.675141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.675148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.675154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.675169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.685146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.685203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.685217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.685224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.685230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.685246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.695172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.695229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.695245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.695253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.695259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.695274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.705144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.705202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.705215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.705222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.705228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.705243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.715271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.715330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.715344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.715351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.715357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.715372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.725205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.725261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.725275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.725282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.725288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.725303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.735224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.735275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.735288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.735295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.735301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.735320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.745281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.745362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.982 [2024-11-04 16:37:35.745375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.982 [2024-11-04 16:37:35.745382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.982 [2024-11-04 16:37:35.745388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.982 [2024-11-04 16:37:35.745403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.982 qpair failed and we were unable to recover it. 
00:26:08.982 [2024-11-04 16:37:35.755293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.982 [2024-11-04 16:37:35.755350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.983 [2024-11-04 16:37:35.755364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.983 [2024-11-04 16:37:35.755371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.983 [2024-11-04 16:37:35.755377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.983 [2024-11-04 16:37:35.755393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.983 qpair failed and we were unable to recover it. 
00:26:08.983 [2024-11-04 16:37:35.765362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.983 [2024-11-04 16:37:35.765430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.983 [2024-11-04 16:37:35.765445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.983 [2024-11-04 16:37:35.765453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.983 [2024-11-04 16:37:35.765459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.983 [2024-11-04 16:37:35.765474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.983 qpair failed and we were unable to recover it. 
00:26:08.983 [2024-11-04 16:37:35.775343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.983 [2024-11-04 16:37:35.775398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.983 [2024-11-04 16:37:35.775412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.983 [2024-11-04 16:37:35.775419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.983 [2024-11-04 16:37:35.775426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.983 [2024-11-04 16:37:35.775441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.983 qpair failed and we were unable to recover it. 
00:26:08.983 [2024-11-04 16:37:35.785421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.983 [2024-11-04 16:37:35.785471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.983 [2024-11-04 16:37:35.785485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.983 [2024-11-04 16:37:35.785492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.983 [2024-11-04 16:37:35.785499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.983 [2024-11-04 16:37:35.785514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.983 qpair failed and we were unable to recover it. 
00:26:08.983 [2024-11-04 16:37:35.795463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:08.983 [2024-11-04 16:37:35.795521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:08.983 [2024-11-04 16:37:35.795535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:08.983 [2024-11-04 16:37:35.795542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:08.983 [2024-11-04 16:37:35.795549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:08.983 [2024-11-04 16:37:35.795564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:08.983 qpair failed and we were unable to recover it. 
00:26:09.243 [2024-11-04 16:37:35.805491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.243 [2024-11-04 16:37:35.805553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.243 [2024-11-04 16:37:35.805567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.243 [2024-11-04 16:37:35.805574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.243 [2024-11-04 16:37:35.805580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.243 [2024-11-04 16:37:35.805595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.243 qpair failed and we were unable to recover it. 
00:26:09.243 [2024-11-04 16:37:35.815442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.243 [2024-11-04 16:37:35.815509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.243 [2024-11-04 16:37:35.815523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.243 [2024-11-04 16:37:35.815530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.243 [2024-11-04 16:37:35.815536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.243 [2024-11-04 16:37:35.815551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.243 qpair failed and we were unable to recover it. 
00:26:09.243 [2024-11-04 16:37:35.825536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.243 [2024-11-04 16:37:35.825592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.243 [2024-11-04 16:37:35.825616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.243 [2024-11-04 16:37:35.825623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.243 [2024-11-04 16:37:35.825629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.243 [2024-11-04 16:37:35.825644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.243 qpair failed and we were unable to recover it. 
00:26:09.243 [2024-11-04 16:37:35.835512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.243 [2024-11-04 16:37:35.835568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.243 [2024-11-04 16:37:35.835582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.243 [2024-11-04 16:37:35.835589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.243 [2024-11-04 16:37:35.835596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.243 [2024-11-04 16:37:35.835618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.243 qpair failed and we were unable to recover it. 
00:26:09.243 [2024-11-04 16:37:35.845627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.845714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.845728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.845735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.845741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.845756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.855637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.855691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.855705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.855712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.855719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.855734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.865664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.865729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.865743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.865751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.865759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.865775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.875711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.875766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.875779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.875786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.875793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.875809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.885736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.885791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.885805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.885812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.885819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.885834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.895766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.243 [2024-11-04 16:37:35.895822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.243 [2024-11-04 16:37:35.895836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.243 [2024-11-04 16:37:35.895843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.243 [2024-11-04 16:37:35.895849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.243 [2024-11-04 16:37:35.895864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.243 qpair failed and we were unable to recover it.
00:26:09.243 [2024-11-04 16:37:35.905721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.905772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.905785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.905792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.905798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.905813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.915828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.915886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.915899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.915906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.915913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.915928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.925859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.925911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.925924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.925932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.925938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.925953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.935880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.935937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.935951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.935958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.935965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.935979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.945904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.945956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.945970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.945977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.945984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.945999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.955870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.955928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.955944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.955951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.955958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.955973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.965967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.966022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.966036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.966043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.966049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.966064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.975991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.976046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.976060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.976067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.976074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.976088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.986069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.986127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.986141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.986148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.986155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.986169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:35.996059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:35.996117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:35.996131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:35.996142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:35.996148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:35.996164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:36.006082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:36.006138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:36.006152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:36.006161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:36.006168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:36.006183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:36.016109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:36.016166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:36.016180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:36.016188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:36.016194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:36.016209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.244 [2024-11-04 16:37:36.026150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.244 [2024-11-04 16:37:36.026241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.244 [2024-11-04 16:37:36.026255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.244 [2024-11-04 16:37:36.026263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.244 [2024-11-04 16:37:36.026269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.244 [2024-11-04 16:37:36.026283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.244 qpair failed and we were unable to recover it.
00:26:09.245 [2024-11-04 16:37:36.036178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.245 [2024-11-04 16:37:36.036233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.245 [2024-11-04 16:37:36.036247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.245 [2024-11-04 16:37:36.036254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.245 [2024-11-04 16:37:36.036260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.245 [2024-11-04 16:37:36.036275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.245 qpair failed and we were unable to recover it.
00:26:09.245 [2024-11-04 16:37:36.046201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.245 [2024-11-04 16:37:36.046255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.245 [2024-11-04 16:37:36.046268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.245 [2024-11-04 16:37:36.046275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.245 [2024-11-04 16:37:36.046281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.245 [2024-11-04 16:37:36.046296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.245 qpair failed and we were unable to recover it.
00:26:09.245 [2024-11-04 16:37:36.056253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.245 [2024-11-04 16:37:36.056309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.245 [2024-11-04 16:37:36.056323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.245 [2024-11-04 16:37:36.056330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.245 [2024-11-04 16:37:36.056337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.245 [2024-11-04 16:37:36.056352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.245 qpair failed and we were unable to recover it.
00:26:09.245 [2024-11-04 16:37:36.066263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.066368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.066382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.066389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.066396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.066411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.076291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.076350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.076364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.076372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.076378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.076393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.086358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.086412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.086428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.086435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.086441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.086457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.096334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.096418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.096434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.096442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.096449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.096465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.106416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.106466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.106479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.106486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.106493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.106508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.116404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.505 [2024-11-04 16:37:36.116460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.505 [2024-11-04 16:37:36.116473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.505 [2024-11-04 16:37:36.116481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.505 [2024-11-04 16:37:36.116487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.505 [2024-11-04 16:37:36.116503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.505 qpair failed and we were unable to recover it.
00:26:09.505 [2024-11-04 16:37:36.126434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.126492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.126506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.126516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.126523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.126538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.136448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.136519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.136534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.136541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.136547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.136562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.146477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.146534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.146548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.146556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.146562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.146577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.156532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.156591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.156610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.156618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.156624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.156640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.166670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.166777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.166791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.166798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.166804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.166824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.176592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.176664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.176678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.176685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.176691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.176706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.186640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.506 [2024-11-04 16:37:36.186693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.506 [2024-11-04 16:37:36.186706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.506 [2024-11-04 16:37:36.186713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.506 [2024-11-04 16:37:36.186719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90
00:26:09.506 [2024-11-04 16:37:36.186735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:09.506 qpair failed and we were unable to recover it.
00:26:09.506 [2024-11-04 16:37:36.196710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.196783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.196798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.196805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.196811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.196827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.206700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.206759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.206772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.206779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.206786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.206801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.216713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.216777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.216791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.216798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.216804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.216819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.226725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.226778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.226792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.226799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.226805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.226820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.236742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.236800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.236814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.236821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.236828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.236842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.246751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.246806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.506 [2024-11-04 16:37:36.246820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.506 [2024-11-04 16:37:36.246826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.506 [2024-11-04 16:37:36.246833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.506 [2024-11-04 16:37:36.246848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.506 qpair failed and we were unable to recover it. 
00:26:09.506 [2024-11-04 16:37:36.256735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.506 [2024-11-04 16:37:36.256793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.256812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.256820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.256827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.256842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.266888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.266953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.266967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.266975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.266981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.266996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.276860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.276917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.276932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.276939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.276945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.276960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.286832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.286885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.286898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.286905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.286911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.286927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.296931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.296987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.297001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.297008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.297018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.297033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.306975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.307043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.307057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.307064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.307071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.307087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.316972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.317046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.317059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.317066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.317072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.317088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.507 [2024-11-04 16:37:36.326992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.507 [2024-11-04 16:37:36.327043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.507 [2024-11-04 16:37:36.327056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.507 [2024-11-04 16:37:36.327063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.507 [2024-11-04 16:37:36.327069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.507 [2024-11-04 16:37:36.327085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.507 qpair failed and we were unable to recover it. 
00:26:09.767 [2024-11-04 16:37:36.337079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.767 [2024-11-04 16:37:36.337138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.767 [2024-11-04 16:37:36.337152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.767 [2024-11-04 16:37:36.337159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.767 [2024-11-04 16:37:36.337165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.767 [2024-11-04 16:37:36.337179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.767 qpair failed and we were unable to recover it. 
00:26:09.767 [2024-11-04 16:37:36.347056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.767 [2024-11-04 16:37:36.347112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.767 [2024-11-04 16:37:36.347126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.767 [2024-11-04 16:37:36.347134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.767 [2024-11-04 16:37:36.347140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.767 [2024-11-04 16:37:36.347155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.767 qpair failed and we were unable to recover it. 
00:26:09.767 [2024-11-04 16:37:36.357072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.767 [2024-11-04 16:37:36.357133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.767 [2024-11-04 16:37:36.357147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.767 [2024-11-04 16:37:36.357155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.767 [2024-11-04 16:37:36.357161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.767 [2024-11-04 16:37:36.357176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.767 qpair failed and we were unable to recover it. 
00:26:09.767 [2024-11-04 16:37:36.367117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.767 [2024-11-04 16:37:36.367178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.767 [2024-11-04 16:37:36.367191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.767 [2024-11-04 16:37:36.367199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.767 [2024-11-04 16:37:36.367206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.767 [2024-11-04 16:37:36.367220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.767 qpair failed and we were unable to recover it. 
00:26:09.767 [2024-11-04 16:37:36.377137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.767 [2024-11-04 16:37:36.377224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.767 [2024-11-04 16:37:36.377238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.377245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.377251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.377265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.387094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.387171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.387190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.387197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.387204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.387219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.397208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.397266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.397280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.397287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.397293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.397308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.407238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.407292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.407306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.407313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.407319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.407334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.417262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.417321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.417335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.417342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.417349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.417365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.427287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.427345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.427358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.427365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.427375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.427389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.437327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.437395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.437408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.437415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.437422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.437436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.447370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.447429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.447443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.447450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.447457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.447471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.457381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.457435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.457450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.457457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.457463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.457478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.467421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.467503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.467518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.467525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.467531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.467547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.477432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.477511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.768 [2024-11-04 16:37:36.477525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.768 [2024-11-04 16:37:36.477532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.768 [2024-11-04 16:37:36.477537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.768 [2024-11-04 16:37:36.477552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.768 qpair failed and we were unable to recover it. 
00:26:09.768 [2024-11-04 16:37:36.487473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.768 [2024-11-04 16:37:36.487530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.487543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.487550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.487557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.487572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.497467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.497523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.497537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.497544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.497551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.497566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.507514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.507569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.507583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.507590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.507596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.507618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.517560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.517629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.517648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.517656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.517662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.517677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.527610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.527664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.527678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.527685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.527692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.527707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.537594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.537665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.537680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.537687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.537693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.537708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.547672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.547724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.547738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.547746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.547752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a4000b90 00:26:09.769 [2024-11-04 16:37:36.547768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.557690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.557763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.557799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.557818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.557829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe03ba0 00:26:09.769 [2024-11-04 16:37:36.557856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.567680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.567741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.567757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.567766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.567773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe03ba0 00:26:09.769 [2024-11-04 16:37:36.567789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.577757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.577842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.577871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.577885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.577896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a0000b90 00:26:09.769 [2024-11-04 16:37:36.577922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:09.769 qpair failed and we were unable to recover it. 
00:26:09.769 [2024-11-04 16:37:36.587800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.769 [2024-11-04 16:37:36.587860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.769 [2024-11-04 16:37:36.587875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.769 [2024-11-04 16:37:36.587883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.769 [2024-11-04 16:37:36.587889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64a0000b90 00:26:09.769 [2024-11-04 16:37:36.587906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:09.769 qpair failed and we were unable to recover it. 00:26:09.769 [2024-11-04 16:37:36.588078] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:09.769 A controller has encountered a failure and is being reset. 
00:26:10.028 [2024-11-04 16:37:36.597787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.028 [2024-11-04 16:37:36.597861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.028 [2024-11-04 16:37:36.597892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.029 [2024-11-04 16:37:36.597906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.029 [2024-11-04 16:37:36.597921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64ac000b90 00:26:10.029 [2024-11-04 16:37:36.597948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.029 qpair failed and we were unable to recover it. 
00:26:10.029 [2024-11-04 16:37:36.607814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.029 [2024-11-04 16:37:36.607874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.029 [2024-11-04 16:37:36.607890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.029 [2024-11-04 16:37:36.607898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.029 [2024-11-04 16:37:36.607905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64ac000b90 00:26:10.029 [2024-11-04 16:37:36.607922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.029 qpair failed and we were unable to recover it. 00:26:10.029 Controller properly reset. 00:26:10.029 Initializing NVMe Controllers 00:26:10.029 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:10.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:10.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:10.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:10.029 Initialization complete. Launching workers. 
00:26:10.029 Starting thread on core 1 00:26:10.029 Starting thread on core 2 00:26:10.029 Starting thread on core 3 00:26:10.029 Starting thread on core 0 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:10.029 00:26:10.029 real 0m10.809s 00:26:10.029 user 0m19.221s 00:26:10.029 sys 0m4.560s 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:10.029 ************************************ 00:26:10.029 END TEST nvmf_target_disconnect_tc2 00:26:10.029 ************************************ 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.029 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.029 rmmod nvme_tcp 00:26:10.029 rmmod nvme_fabrics 00:26:10.029 rmmod nvme_keyring 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2971328 ']' 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2971328 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2971328 ']' 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2971328 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2971328 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2971328' 00:26:10.288 killing process with pid 2971328 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2971328 00:26:10.288 16:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2971328 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.288 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.547 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.547 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.547 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.547 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.547 16:37:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.452 16:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.452 00:26:12.452 real 0m18.867s 00:26:12.452 user 0m46.739s 00:26:12.452 sys 0m8.964s 00:26:12.452 16:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.452 16:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 ************************************ 00:26:12.452 END TEST nvmf_target_disconnect 00:26:12.452 ************************************ 00:26:12.452 16:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:12.452 00:26:12.452 real 5m42.690s 00:26:12.452 user 10m23.947s 00:26:12.452 sys 1m51.839s 00:26:12.453 16:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.453 16:37:39 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.453 ************************************ 00:26:12.453 END TEST nvmf_host 00:26:12.453 ************************************ 00:26:12.453 16:37:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:12.453 16:37:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:12.453 16:37:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:12.453 16:37:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:12.453 16:37:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.453 16:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.712 ************************************ 00:26:12.712 START TEST nvmf_target_core_interrupt_mode 00:26:12.712 ************************************ 00:26:12.712 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:12.712 * Looking for test storage... 
00:26:12.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:12.712 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.712 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:12.713 16:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.713 --rc 
genhtml_branch_coverage=1 00:26:12.713 --rc genhtml_function_coverage=1 00:26:12.713 --rc genhtml_legend=1 00:26:12.713 --rc geninfo_all_blocks=1 00:26:12.713 --rc geninfo_unexecuted_blocks=1 00:26:12.713 00:26:12.713 ' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.713 --rc genhtml_branch_coverage=1 00:26:12.713 --rc genhtml_function_coverage=1 00:26:12.713 --rc genhtml_legend=1 00:26:12.713 --rc geninfo_all_blocks=1 00:26:12.713 --rc geninfo_unexecuted_blocks=1 00:26:12.713 00:26:12.713 ' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.713 --rc genhtml_branch_coverage=1 00:26:12.713 --rc genhtml_function_coverage=1 00:26:12.713 --rc genhtml_legend=1 00:26:12.713 --rc geninfo_all_blocks=1 00:26:12.713 --rc geninfo_unexecuted_blocks=1 00:26:12.713 00:26:12.713 ' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.713 --rc genhtml_branch_coverage=1 00:26:12.713 --rc genhtml_function_coverage=1 00:26:12.713 --rc genhtml_legend=1 00:26:12.713 --rc geninfo_all_blocks=1 00:26:12.713 --rc geninfo_unexecuted_blocks=1 00:26:12.713 00:26:12.713 ' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.713 
16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.713 16:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.713 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:12.714 
16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:12.714 ************************************ 00:26:12.714 START TEST nvmf_abort 00:26:12.714 ************************************ 00:26:12.714 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:12.983 * Looking for test storage... 
00:26:12.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:12.983 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.983 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.983 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.983 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.983 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:12.984 16:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.984 --rc genhtml_branch_coverage=1 00:26:12.984 --rc genhtml_function_coverage=1 00:26:12.984 --rc genhtml_legend=1 00:26:12.984 --rc geninfo_all_blocks=1 00:26:12.984 --rc geninfo_unexecuted_blocks=1 00:26:12.984 00:26:12.984 ' 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.984 --rc genhtml_branch_coverage=1 00:26:12.984 --rc genhtml_function_coverage=1 00:26:12.984 --rc genhtml_legend=1 00:26:12.984 --rc geninfo_all_blocks=1 00:26:12.984 --rc geninfo_unexecuted_blocks=1 00:26:12.984 00:26:12.984 ' 00:26:12.984 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.984 --rc genhtml_branch_coverage=1 00:26:12.984 --rc genhtml_function_coverage=1 00:26:12.985 --rc genhtml_legend=1 00:26:12.985 --rc geninfo_all_blocks=1 00:26:12.985 --rc geninfo_unexecuted_blocks=1 00:26:12.985 00:26:12.985 ' 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.985 --rc genhtml_branch_coverage=1 00:26:12.985 --rc genhtml_function_coverage=1 00:26:12.985 --rc genhtml_legend=1 00:26:12.985 --rc geninfo_all_blocks=1 00:26:12.985 --rc geninfo_unexecuted_blocks=1 00:26:12.985 00:26:12.985 ' 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.985 16:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.985 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.986 16:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.986 16:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.262 16:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:18.262 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:18.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.262 
16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:18.262 Found net devices under 0000:86:00.0: cvl_0_0 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:18.262 Found net devices under 0000:86:00.1: cvl_0_1 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.262 16:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.262 16:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.262 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.262 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.262 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.262 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:26:18.521 00:26:18.521 --- 10.0.0.2 ping statistics --- 00:26:18.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.521 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:26:18.521 00:26:18.521 --- 10.0.0.1 ping statistics --- 00:26:18.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.521 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2975860 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2975860 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2975860 ']' 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.521 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.521 [2024-11-04 16:37:45.253560] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:18.521 [2024-11-04 16:37:45.254507] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:26:18.521 [2024-11-04 16:37:45.254541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.521 [2024-11-04 16:37:45.321939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:18.780 [2024-11-04 16:37:45.364567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.780 [2024-11-04 16:37:45.364607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.780 [2024-11-04 16:37:45.364615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.780 [2024-11-04 16:37:45.364621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.780 [2024-11-04 16:37:45.364626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.780 [2024-11-04 16:37:45.365988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.780 [2024-11-04 16:37:45.366079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.780 [2024-11-04 16:37:45.366081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.780 [2024-11-04 16:37:45.432970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:18.781 [2024-11-04 16:37:45.432986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:18.781 [2024-11-04 16:37:45.433292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:18.781 [2024-11-04 16:37:45.433330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 [2024-11-04 16:37:45.490838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:18.781 Malloc0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 Delay0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 [2024-11-04 16:37:45.566725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.781 16:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:19.040 [2024-11-04 16:37:45.683356] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:20.943 Initializing NVMe Controllers 00:26:20.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:20.943 controller IO queue size 128 less than required 00:26:20.943 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:20.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:20.943 Initialization complete. Launching workers. 
00:26:20.943 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38422 00:26:20.943 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38479, failed to submit 66 00:26:20.943 success 38422, unsuccessful 57, failed 0 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.943 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.202 rmmod nvme_tcp 00:26:21.202 rmmod nvme_fabrics 00:26:21.202 rmmod nvme_keyring 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.202 16:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2975860 ']' 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2975860 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2975860 ']' 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2975860 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2975860 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2975860' 00:26:21.202 killing process with pid 2975860 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2975860 00:26:21.202 16:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2975860 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.460 16:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.460 16:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:23.364 00:26:23.364 real 0m10.606s 00:26:23.364 user 0m10.110s 00:26:23.364 sys 0m5.331s 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:23.364 ************************************ 00:26:23.364 END TEST nvmf_abort 00:26:23.364 ************************************ 00:26:23.364 16:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.364 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:23.622 ************************************ 00:26:23.622 START TEST nvmf_ns_hotplug_stress 00:26:23.622 ************************************ 00:26:23.622 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:23.622 * Looking for test storage... 
00:26:23.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.622 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.622 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.622 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.623 16:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.623 16:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.623 --rc genhtml_branch_coverage=1 00:26:23.623 --rc genhtml_function_coverage=1 00:26:23.623 --rc genhtml_legend=1 00:26:23.623 --rc geninfo_all_blocks=1 00:26:23.623 --rc geninfo_unexecuted_blocks=1 00:26:23.623 00:26:23.623 ' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:23.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.623 --rc genhtml_branch_coverage=1 00:26:23.623 --rc genhtml_function_coverage=1 00:26:23.623 --rc genhtml_legend=1 00:26:23.623 --rc geninfo_all_blocks=1 00:26:23.623 --rc geninfo_unexecuted_blocks=1 00:26:23.623 00:26:23.623 ' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.623 --rc genhtml_branch_coverage=1 00:26:23.623 --rc genhtml_function_coverage=1 00:26:23.623 --rc genhtml_legend=1 00:26:23.623 --rc geninfo_all_blocks=1 00:26:23.623 --rc geninfo_unexecuted_blocks=1 00:26:23.623 00:26:23.623 ' 00:26:23.623 16:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.623 --rc genhtml_branch_coverage=1 00:26:23.623 --rc genhtml_function_coverage=1 00:26:23.623 --rc genhtml_legend=1 00:26:23.623 --rc geninfo_all_blocks=1 00:26:23.623 --rc geninfo_unexecuted_blocks=1 00:26:23.623 00:26:23.623 ' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.623 16:37:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.623 
16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.623 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.624 16:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.897 
16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.897 16:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.897 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:28.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.898 16:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:28.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.898 
16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:28.898 Found net devices under 0000:86:00.0: cvl_0_0 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:28.898 Found net devices under 0000:86:00.1: cvl_0_1 00:26:28.898 
16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.898 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.157 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.157 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.157 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.157 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:26:29.157 00:26:29.157 --- 10.0.0.2 ping statistics --- 00:26:29.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.157 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:26:29.157 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:26:29.158 00:26:29.158 --- 10.0.0.1 ping statistics --- 00:26:29.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.158 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.158 16:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2979851 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2979851 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2979851 ']' 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.158 16:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:29.158 [2024-11-04 16:37:55.861083] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:29.158 [2024-11-04 16:37:55.862037] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:26:29.158 [2024-11-04 16:37:55.862072] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.158 [2024-11-04 16:37:55.929412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:29.158 [2024-11-04 16:37:55.970808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.158 [2024-11-04 16:37:55.970843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.158 [2024-11-04 16:37:55.970854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.158 [2024-11-04 16:37:55.970859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.158 [2024-11-04 16:37:55.970864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:29.158 [2024-11-04 16:37:55.972248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.158 [2024-11-04 16:37:55.972334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.158 [2024-11-04 16:37:55.972336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.417 [2024-11-04 16:37:56.038917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:29.417 [2024-11-04 16:37:56.038937] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:29.417 [2024-11-04 16:37:56.039250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:29.417 [2024-11-04 16:37:56.039294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:26:29.417 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:29.676 [2024-11-04 16:37:56.269032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.676 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:29.676 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.934 [2024-11-04 16:37:56.665321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.934 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.192 16:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:30.451 Malloc0 00:26:30.451 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:30.709 Delay0 00:26:30.709 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.709 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:30.967 NULL1 00:26:30.967 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:31.226 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2980117 00:26:31.226 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:31.226 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.226 16:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:31.488 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.488 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:31.488 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:31.752 true 00:26:31.752 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:31.752 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.010 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.268 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:32.268 16:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:32.268 true 00:26:32.527 16:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:32.527 16:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.461 Read completed with error (sct=0, sc=11) 00:26:33.461 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:33.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:33.719 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:33.719 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:33.719 true 00:26:33.978 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:33.978 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.978 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.237 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:34.237 16:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:34.496 true 00:26:34.496 16:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:34.496 16:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.432 16:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.691 16:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:35.691 16:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:35.949 true 00:26:35.949 16:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:35.949 16:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:36.884 16:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:36.884 16:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:36.884 16:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:37.143 
true 00:26:37.143 16:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:37.143 16:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.402 16:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:37.661 16:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:37.661 16:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:37.661 true 00:26:37.661 16:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:37.661 16:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.037 16:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:39.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.037 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.295 16:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:39.295 16:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:39.295 true 00:26:39.295 16:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:39.295 16:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.231 16:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.489 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:40.489 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:40.489 true 00:26:40.489 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:40.489 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.748 16:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.006 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:41.006 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:41.265 true 00:26:41.265 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:41.265 16:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.201 16:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:42.459 16:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:42.459 16:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:42.718 true 00:26:42.718 16:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:42.718 16:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:43.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:43.665 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.665 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:43.665 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:43.923 true 00:26:43.923 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:43.923 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.182 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.182 16:38:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:44.182 16:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:44.441 true 00:26:44.441 16:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:44.441 16:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 16:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:45.818 16:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:45.818 16:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:26:46.077 true 00:26:46.077 16:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:46.077 16:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.013 16:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.013 16:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:47.013 16:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:47.271 true 00:26:47.271 16:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:47.271 16:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.529 16:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.788 16:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:47.788 16:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:47.788 true 00:26:47.788 16:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:47.788 16:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 16:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.164 16:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:49.164 16:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:49.423 true 00:26:49.423 16:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:49.423 16:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:26:50.359 16:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.359 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:50.359 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:50.618 true 00:26:50.618 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:50.618 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.877 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.877 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:50.877 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:51.136 true 00:26:51.136 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:51.136 16:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:52.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.072 16:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.331 16:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:52.331 16:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:52.589 true 00:26:52.589 16:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:52.589 16:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.525 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:53.525 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:53.525 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:53.783 true 00:26:53.783 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:53.783 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.041 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:54.299 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:54.299 16:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:54.557 true 00:26:54.557 16:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:54.557 16:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.493 16:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.751 16:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:55.751 16:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:56.010 true 00:26:56.010 16:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:56.010 16:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.945 16:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:56.945 16:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:56.945 16:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:57.204 true 00:26:57.204 16:38:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:57.204 16:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.493 16:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.815 16:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:57.815 16:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:57.815 true 00:26:57.815 16:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:57.815 16:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:58.816 16:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.074 16:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:59.074 16:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:59.074 true 
00:26:59.332 16:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:59.332 16:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.333 16:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.591 16:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:59.591 16:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:59.849 true 00:26:59.849 16:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:26:59.849 16:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:00.784 16:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:00.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:01.043 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:01.043 16:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:01.043 16:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:01.301 true 00:27:01.301 16:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:27:01.301 16:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.237 16:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.237 Initializing NVMe Controllers 00:27:02.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.237 Controller IO queue size 128, less than required. 00:27:02.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.237 Controller IO queue size 128, less than required. 00:27:02.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.237 Initialization complete. Launching workers. 
00:27:02.237 ======================================================== 00:27:02.237 Latency(us) 00:27:02.237 Device Information : IOPS MiB/s Average min max 00:27:02.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2005.73 0.98 43717.07 2599.96 1054898.16 00:27:02.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17791.73 8.69 7176.33 1526.29 447185.69 00:27:02.237 ======================================================== 00:27:02.237 Total : 19797.47 9.67 10878.37 1526.29 1054898.16 00:27:02.237 00:27:02.237 16:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:02.237 16:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:02.496 true 00:27:02.496 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2980117 00:27:02.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2980117) - No such process 00:27:02.496 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2980117 00:27:02.496 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.754 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:02.754 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:02.754 
16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:02.754 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:02.754 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:02.754 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:03.030 null0 00:27:03.030 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:03.030 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:03.030 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:03.295 null1 00:27:03.295 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:03.295 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:03.295 16:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:03.295 null2 00:27:03.295 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:03.295 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:03.296 16:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:03.554 null3 00:27:03.555 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:03.555 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:03.555 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:03.813 null4 00:27:03.813 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:03.813 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:03.813 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:04.072 null5 00:27:04.072 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:04.073 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:04.073 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:04.073 null6 00:27:04.073 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:04.073 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:04.073 16:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:04.332 null7 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.332 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2985671 2985673 2985674 2985676 2985678 2985680 2985682 2985683 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.333 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:04.592 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:04.852 16:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:04.852 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.112 16:38:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.112 16:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:05.371 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:05.631 16:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.631 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.890 16:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:05.890 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.891 16:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:05.891 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:06.150 16:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:06.409 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:06.668 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.927 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:06.928 16:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:06.928 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.187 16:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.446 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:07.705 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:07.965 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:07.965 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:07.965 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:07.966 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:08.226 16:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:08.485 16:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.485 rmmod nvme_tcp 00:27:08.485 rmmod nvme_fabrics 00:27:08.485 rmmod nvme_keyring 00:27:08.485 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2979851 ']' 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2979851 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2979851 ']' 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 2979851 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2979851 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2979851' 00:27:08.486 killing process with pid 2979851 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2979851 00:27:08.486 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2979851 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.745 16:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.280 00:27:11.280 real 0m47.340s 00:27:11.280 user 3m0.016s 00:27:11.280 sys 0m19.831s 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:11.280 ************************************ 00:27:11.280 END TEST nvmf_ns_hotplug_stress 00:27:11.280 ************************************ 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:11.280 16:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:11.280 ************************************ 00:27:11.280 START TEST nvmf_delete_subsystem 00:27:11.280 ************************************ 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:11.280 * Looking for test storage... 00:27:11.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.280 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.281 16:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.281 --rc genhtml_branch_coverage=1 00:27:11.281 --rc genhtml_function_coverage=1 00:27:11.281 --rc genhtml_legend=1 00:27:11.281 --rc geninfo_all_blocks=1 00:27:11.281 --rc geninfo_unexecuted_blocks=1 00:27:11.281 00:27:11.281 ' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.281 --rc genhtml_branch_coverage=1 00:27:11.281 --rc genhtml_function_coverage=1 00:27:11.281 --rc genhtml_legend=1 00:27:11.281 --rc geninfo_all_blocks=1 00:27:11.281 --rc geninfo_unexecuted_blocks=1 00:27:11.281 00:27:11.281 ' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.281 --rc genhtml_branch_coverage=1 00:27:11.281 --rc genhtml_function_coverage=1 00:27:11.281 --rc genhtml_legend=1 00:27:11.281 --rc geninfo_all_blocks=1 00:27:11.281 --rc geninfo_unexecuted_blocks=1 00:27:11.281 00:27:11.281 ' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.281 --rc genhtml_branch_coverage=1 00:27:11.281 --rc genhtml_function_coverage=1 00:27:11.281 --rc genhtml_legend=1 00:27:11.281 --rc geninfo_all_blocks=1 00:27:11.281 --rc geninfo_unexecuted_blocks=1 00:27:11.281 00:27:11.281 ' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.281 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.282 16:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.282 16:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.551 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.551 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.551 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.552 16:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.552 16:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.552 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.552 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.552 16:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.552 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.552 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.552 16:38:42 
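The discovery loop above (nvmf/common.sh@410-429) maps each e810 PCI function to its kernel netdev name by globbing sysfs. A self-contained sketch of that mapping, run against a throwaway fake sysfs tree so it works without the real NICs (the directory layout mirrors what the kernel exposes; the temp-dir stand-in is an assumption for portability):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device mapping performed in the log above.
# A fake sysfs tree stands in for /sys/bus/pci/devices so no hardware is needed.
sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)        # kernel lists netdev names under .../net/
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The accumulated `net_devs` array is what later becomes `TCP_INTERFACE_LIST`, from which the target and initiator interfaces are picked.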
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.552 16:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.552 16:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.552 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:16.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:27:16.553 00:27:16.553 --- 10.0.0.2 ping statistics --- 00:27:16.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.553 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:16.553 00:27:16.553 --- 10.0.0.1 ping statistics --- 00:27:16.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.553 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.553 
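The `nvmf_tcp_init` steps traced above can be summarized as the following dry-run sketch: the target-side port is moved into its own network namespace, both sides get addresses, the NVMe/TCP port is opened, and reachability is verified with ping. Commands are echoed rather than executed so the sketch needs no root or NICs; interface names and addresses are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing performed by nvmf_tcp_init above.
run() { printf '%s\n' "$*"; }

ns=cvl_0_0_ns_spdk
netns_cmds="$(
  run ip netns add "$ns"                                        # target lives in its own netns
  run ip link set cvl_0_0 netns "$ns"                           # move the target-side port
  run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP (root ns)
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
  run ping -c 1 10.0.0.2                                        # verify initiator -> target path
)"
printf '%s\n' "$netns_cmds"
```

Splitting target and initiator across namespaces lets one host exercise a real TCP path between two interfaces without a second machine.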
16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2989826 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2989826 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2989826 ']' 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
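The `waitforlisten 2989826` call above polls until the freshly launched `nvmf_tgt` is alive and its RPC socket is ready. A rough, self-contained re-implementation of that helper; checking mere path existence with `-e` (instead of probing the socket with a real RPC call) is an assumption made so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Rough sketch of the waitforlisten helper from common/autotest_common.sh:
# poll until the RPC socket path appears or the target process dies.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
    [ -e "$rpc_addr" ] && return 0           # socket path present: assume it is up
    sleep 0.1
  done
  return 1                                   # retries exhausted
}
```

In the log it is the backgrounded `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3` whose pid (2989826) is waited on this way.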
00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.553 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.553 [2024-11-04 16:38:43.321517] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:16.553 [2024-11-04 16:38:43.322453] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:27:16.553 [2024-11-04 16:38:43.322488] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.812 [2024-11-04 16:38:43.393395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:16.812 [2024-11-04 16:38:43.434272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.812 [2024-11-04 16:38:43.434311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.812 [2024-11-04 16:38:43.434318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.812 [2024-11-04 16:38:43.434325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.812 [2024-11-04 16:38:43.434330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.812 [2024-11-04 16:38:43.435523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.812 [2024-11-04 16:38:43.435531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.812 [2024-11-04 16:38:43.502596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:16.812 [2024-11-04 16:38:43.502845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:16.812 [2024-11-04 16:38:43.502902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.812 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 [2024-11-04 16:38:43.568272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 [2024-11-04 16:38:43.592269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 NULL1 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 Delay0 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2989992 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:16.813 16:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:17.072 [2024-11-04 16:38:43.686985] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
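The RPC sequence the test drives above, condensed into one dry-run sketch: `rpc` echoes each call instead of invoking `scripts/rpc.py` against the live target, so no running `nvmf_tgt` is needed. All subsystem names, bdev names, and arguments are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem.sh RPC sequence traced above.
rpc() { printf 'rpc.py %s\n' "$*"; }

rpc_cmds="$(
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512      # null bdev: size in MiB, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # spdk_nvme_perf then runs against 10.0.0.2:4420 in the background while
  # the subsystem is deleted out from under it, which is why the queued
  # commands in the log below complete with error (sct=0, sc=8):
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
)"
printf '%s\n' "$rpc_cmds"
```

The delay bdev (`Delay0`, 1,000,000 us latencies on every path) is the point of the test: it keeps plenty of I/O in flight so `nvmf_delete_subsystem` is guaranteed to race with outstanding commands.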
00:27:18.974 16:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.974 16:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.974 16:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, 
sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 [2024-11-04 16:38:45.806060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52cc000c40 is same with the state(6) to be set 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write 
completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read 
completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 
00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, 
sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Read completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 Write completed with error (sct=0, sc=8) 00:27:19.234 starting I/O failed: -6 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Write completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Write completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 starting I/O failed: -6 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Write completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 starting I/O failed: -6 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Write completed with error (sct=0, sc=8) 00:27:19.235 starting I/O failed: -6 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 Read completed with error (sct=0, sc=8) 00:27:19.235 starting I/O failed: -6 00:27:20.171 [2024-11-04 16:38:46.782425] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a09a0 is same with the state(6) to be set 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 
00:27:20.171 Write completed with error (sct=0, sc=8) 00:27:20.171 Read completed with error (sct=0, sc=8) 00:27:20.171 [2024-11-04 16:38:46.808569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239f2c0 is same with the state(6) to be set 00:27:20.172 [2024-11-04 16:38:46.808756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239f4a0 is same with the state(6) to be set 00:27:20.172 [2024-11-04 16:38:46.808905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239f860 is same with the state(6) to be set 00:27:20.172 [2024-11-04 16:38:46.809423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52cc00d350 is same with the state(6) to be set 00:27:20.172 Initializing NVMe Controllers 00:27:20.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.172 Controller IO queue size 128, less than
required. 00:27:20.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:20.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:20.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:20.172 Initialization complete. Launching workers. 00:27:20.172 ======================================================== 00:27:20.172 Latency(us) 00:27:20.172 Device Information : IOPS MiB/s Average min max 00:27:20.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.49 0.09 945980.66 623.77 1012293.26 00:27:20.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.77 0.08 867298.01 323.59 1011233.14 00:27:20.172 ======================================================== 00:27:20.172 Total : 352.26 0.17 910739.69 323.59 1012293.26 00:27:20.172 00:27:20.172 [2024-11-04 16:38:46.810161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a09a0 (9): Bad file descriptor 00:27:20.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:20.172 16:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.172 16:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:20.172 16:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2989992 00:27:20.172 16:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 2989992 00:27:20.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2989992) - No such process 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2989992 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2989992 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2989992 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:20.737 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:20.738 [2024-11-04 16:38:47.340550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2990533 00:27:20.738 16:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:20.738 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:20.738 [2024-11-04 16:38:47.407730] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:21.302 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:21.302 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:21.302 16:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:21.559 16:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:21.559 16:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:21.559 16:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:22.126 16:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:22.126 16:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:22.126 16:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:22.693 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:22.693 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:22.693 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:23.266 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:23.266 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:23.266 16:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:23.833 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:23.833 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:23.833 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:23.833 Initializing NVMe Controllers 00:27:23.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.833 Controller IO queue size 128, less than required. 00:27:23.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:23.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:23.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:23.833 Initialization complete. Launching workers. 00:27:23.833 ======================================================== 00:27:23.833 Latency(us) 00:27:23.833 Device Information : IOPS MiB/s Average min max 00:27:23.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002965.31 1000189.05 1009747.74 00:27:23.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005577.21 1000266.44 1042241.66 00:27:23.833 ======================================================== 00:27:23.833 Total : 256.00 0.12 1004271.26 1000189.05 1042241.66 00:27:23.833 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2990533 00:27:24.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2990533) - No such process 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2990533 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.092 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.092 rmmod nvme_tcp 00:27:24.092 rmmod nvme_fabrics 00:27:24.351 rmmod nvme_keyring 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2989826 ']' 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2989826 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2989826 ']' 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2989826 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.351 16:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989826 00:27:24.351 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:27:24.351 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:24.351 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989826' 00:27:24.351 killing process with pid 2989826 00:27:24.351 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2989826 00:27:24.351 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2989826 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.610 16:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.516 00:27:26.516 real 0m15.648s 00:27:26.516 user 0m26.143s 00:27:26.516 sys 0m5.647s 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.516 ************************************ 00:27:26.516 END TEST nvmf_delete_subsystem 00:27:26.516 ************************************ 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:26.516 ************************************ 00:27:26.516 START TEST nvmf_host_management 00:27:26.516 ************************************ 00:27:26.516 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:26.775 * Looking for test storage... 
00:27:26.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.775 16:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.775 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.776 --rc genhtml_branch_coverage=1 00:27:26.776 --rc genhtml_function_coverage=1 00:27:26.776 --rc genhtml_legend=1 00:27:26.776 --rc geninfo_all_blocks=1 00:27:26.776 --rc geninfo_unexecuted_blocks=1 00:27:26.776 00:27:26.776 ' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.776 --rc genhtml_branch_coverage=1 00:27:26.776 --rc genhtml_function_coverage=1 00:27:26.776 --rc genhtml_legend=1 00:27:26.776 --rc geninfo_all_blocks=1 00:27:26.776 --rc geninfo_unexecuted_blocks=1 00:27:26.776 00:27:26.776 ' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:26.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.776 --rc genhtml_branch_coverage=1 00:27:26.776 --rc genhtml_function_coverage=1 00:27:26.776 --rc genhtml_legend=1 00:27:26.776 --rc geninfo_all_blocks=1 00:27:26.776 --rc geninfo_unexecuted_blocks=1 00:27:26.776 00:27:26.776 ' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:26.776 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.776 --rc genhtml_branch_coverage=1 00:27:26.776 --rc genhtml_function_coverage=1 00:27:26.776 --rc genhtml_legend=1 00:27:26.776 --rc geninfo_all_blocks=1 00:27:26.776 --rc geninfo_unexecuted_blocks=1 00:27:26.776 00:27:26.776 ' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.776 16:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.776 
16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.776 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.777 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.777 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.777 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:26.777 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.777 16:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.050 
16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.050 16:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:32.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.050 16:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:32.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.050 16:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:32.050 Found net devices under 0000:86:00.0: cvl_0_0 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:32.050 Found net devices under 0000:86:00.1: cvl_0_1 00:27:32.050 16:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.050 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:27:32.051 00:27:32.051 --- 10.0.0.2 ping statistics --- 00:27:32.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.051 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:32.051 00:27:32.051 --- 10.0.0.1 ping statistics --- 00:27:32.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.051 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2994522 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2994522 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2994522 ']' 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:32.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:32.051 [2024-11-04 16:38:58.555915] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:32.051 [2024-11-04 16:38:58.556824] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:27:32.051 [2024-11-04 16:38:58.556857] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.051 [2024-11-04 16:38:58.622811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.051 [2024-11-04 16:38:58.665017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.051 [2024-11-04 16:38:58.665053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.051 [2024-11-04 16:38:58.665061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.051 [2024-11-04 16:38:58.665067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.051 [2024-11-04 16:38:58.665072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:32.051 [2024-11-04 16:38:58.666658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.051 [2024-11-04 16:38:58.666747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.051 [2024-11-04 16:38:58.666854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.051 [2024-11-04 16:38:58.666854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:32.051 [2024-11-04 16:38:58.731956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:32.051 [2024-11-04 16:38:58.732158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:32.051 [2024-11-04 16:38:58.732561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:32.051 [2024-11-04 16:38:58.732625] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:32.051 [2024-11-04 16:38:58.732757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 [2024-11-04 16:38:58.787476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 16:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.051 Malloc0 00:27:32.051 [2024-11-04 16:38:58.855525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:32.051 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.310 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2994563 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2994563 /var/tmp/bdevperf.sock 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2994563 ']' 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:32.311 { 00:27:32.311 "params": { 00:27:32.311 "name": "Nvme$subsystem", 00:27:32.311 "trtype": "$TEST_TRANSPORT", 00:27:32.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.311 "adrfam": "ipv4", 00:27:32.311 "trsvcid": "$NVMF_PORT", 00:27:32.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.311 "hdgst": ${hdgst:-false}, 00:27:32.311 "ddgst": ${ddgst:-false} 00:27:32.311 }, 00:27:32.311 "method": "bdev_nvme_attach_controller" 00:27:32.311 } 00:27:32.311 EOF 00:27:32.311 )") 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:32.311 16:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:32.311 "params": { 00:27:32.311 "name": "Nvme0", 00:27:32.311 "trtype": "tcp", 00:27:32.311 "traddr": "10.0.0.2", 00:27:32.311 "adrfam": "ipv4", 00:27:32.311 "trsvcid": "4420", 00:27:32.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.311 "hdgst": false, 00:27:32.311 "ddgst": false 00:27:32.311 }, 00:27:32.311 "method": "bdev_nvme_attach_controller" 00:27:32.311 }' 00:27:32.311 [2024-11-04 16:38:58.950609] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:27:32.311 [2024-11-04 16:38:58.950655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994563 ] 00:27:32.311 [2024-11-04 16:38:59.014667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.311 [2024-11-04 16:38:59.055860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.570 Running I/O for 10 seconds... 
00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:32.570 16:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:27:32.570 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=655 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 655 -ge 100 ']' 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.830 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.830 [2024-11-04 16:38:59.607254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.607377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ceec0 is same with the state(6) to be set 00:27:32.831 [2024-11-04 16:38:59.611623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:32.831 [2024-11-04 16:38:59.611657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.611666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.831 [2024-11-04 16:38:59.611674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.611682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.831 [2024-11-04 16:38:59.611689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.611696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:32.831 [2024-11-04 16:38:59.611703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.611709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230d500 is same with the state(6) to be set 00:27:32.831 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.831 [2024-11-04 16:38:59.612413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 
16:38:59.612635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:32.831 [2024-11-04 16:38:59.612643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612799] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.831 [2024-11-04 16:38:59.612815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.831 [2024-11-04 16:38:59.612822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.832 [2024-11-04 16:38:59.612961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.612983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.612992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:32.832 [2024-11-04 16:38:59.613245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.832 [2024-11-04 16:38:59.613390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.832 [2024-11-04 16:38:59.613396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.833 [2024-11-04 16:38:59.613404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-11-04 16:38:59.613411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.833 [2024-11-04 16:38:59.613419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-11-04 16:38:59.613426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.833 [2024-11-04 16:38:59.614358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:32.833 task offset: 98304 on job bdev=Nvme0n1 fails 00:27:32.833 00:27:32.833 Latency(us) 00:27:32.833 [2024-11-04T15:38:59.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.833 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:32.833 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:32.833 Verification LBA range: start 0x0 length 0x400 00:27:32.833 Nvme0n1 : 0.40 1904.98 119.06 158.75 0.00 30192.44 1404.34 26963.38 00:27:32.833 [2024-11-04T15:38:59.657Z] =================================================================================================================== 00:27:32.833 [2024-11-04T15:38:59.657Z] Total : 1904.98 119.06 158.75 0.00 30192.44 1404.34 26963.38 00:27:32.833 [2024-11-04 16:38:59.616709] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:32.833 [2024-11-04 16:38:59.616730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d500 (9): Bad file descriptor 00:27:32.833 [2024-11-04 
16:38:59.617751] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:32.833 [2024-11-04 16:38:59.617823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:32.833 [2024-11-04 16:38:59.617846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.833 [2024-11-04 16:38:59.617860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:32.833 [2024-11-04 16:38:59.617868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:32.833 [2024-11-04 16:38:59.617875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.833 [2024-11-04 16:38:59.617883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x230d500 00:27:32.833 [2024-11-04 16:38:59.617901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d500 (9): Bad file descriptor 00:27:32.833 [2024-11-04 16:38:59.617913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:32.833 [2024-11-04 16:38:59.617921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:32.833 [2024-11-04 16:38:59.617932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:32.833 [2024-11-04 16:38:59.617940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:32.833 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.833 16:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2994563 00:27:34.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2994563) - No such process 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:34.208 { 00:27:34.208 "params": { 00:27:34.208 "name": "Nvme$subsystem", 00:27:34.208 "trtype": "$TEST_TRANSPORT", 00:27:34.208 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:34.208 "adrfam": "ipv4", 00:27:34.208 "trsvcid": "$NVMF_PORT", 00:27:34.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.208 "hdgst": ${hdgst:-false}, 00:27:34.208 "ddgst": ${ddgst:-false} 00:27:34.208 }, 00:27:34.208 "method": "bdev_nvme_attach_controller" 00:27:34.208 } 00:27:34.208 EOF 00:27:34.208 )") 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:34.208 16:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:34.208 "params": { 00:27:34.208 "name": "Nvme0", 00:27:34.208 "trtype": "tcp", 00:27:34.208 "traddr": "10.0.0.2", 00:27:34.208 "adrfam": "ipv4", 00:27:34.208 "trsvcid": "4420", 00:27:34.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.208 "hdgst": false, 00:27:34.208 "ddgst": false 00:27:34.208 }, 00:27:34.208 "method": "bdev_nvme_attach_controller" 00:27:34.208 }' 00:27:34.208 [2024-11-04 16:39:00.681155] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:27:34.208 [2024-11-04 16:39:00.681213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994939 ] 00:27:34.208 [2024-11-04 16:39:00.748121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.208 [2024-11-04 16:39:00.788979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.208 Running I/O for 1 seconds... 
00:27:35.402 1984.00 IOPS, 124.00 MiB/s 00:27:35.402 Latency(us) 00:27:35.402 [2024-11-04T15:39:02.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.402 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.402 Verification LBA range: start 0x0 length 0x400 00:27:35.402 Nvme0n1 : 1.03 1996.60 124.79 0.00 0.00 31566.19 6272.73 26963.38 00:27:35.402 [2024-11-04T15:39:02.226Z] =================================================================================================================== 00:27:35.402 [2024-11-04T15:39:02.226Z] Total : 1996.60 124.79 0.00 0.00 31566.19 6272.73 26963.38 00:27:35.402 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:35.402 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:35.402 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:35.402 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:35.402 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:35.403 
16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.403 rmmod nvme_tcp 00:27:35.403 rmmod nvme_fabrics 00:27:35.403 rmmod nvme_keyring 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2994522 ']' 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2994522 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2994522 ']' 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2994522 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.403 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2994522 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:35.662 16:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2994522' 00:27:35.662 killing process with pid 2994522 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2994522 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2994522 00:27:35.662 [2024-11-04 16:39:02.426461] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.662 16:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:38.230 00:27:38.230 real 0m11.207s 00:27:38.230 user 0m16.807s 00:27:38.230 sys 0m5.519s 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 ************************************ 00:27:38.230 END TEST nvmf_host_management 00:27:38.230 ************************************ 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.230 ************************************ 00:27:38.230 START TEST nvmf_lvol 00:27:38.230 ************************************ 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:38.230 * Looking for test storage... 
00:27:38.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.230 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:38.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.231 --rc genhtml_branch_coverage=1 00:27:38.231 --rc genhtml_function_coverage=1 00:27:38.231 --rc genhtml_legend=1 00:27:38.231 --rc geninfo_all_blocks=1 00:27:38.231 --rc geninfo_unexecuted_blocks=1 00:27:38.231 00:27:38.231 ' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:38.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.231 --rc genhtml_branch_coverage=1 00:27:38.231 --rc genhtml_function_coverage=1 00:27:38.231 --rc genhtml_legend=1 00:27:38.231 --rc geninfo_all_blocks=1 00:27:38.231 --rc geninfo_unexecuted_blocks=1 00:27:38.231 00:27:38.231 ' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:38.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.231 --rc genhtml_branch_coverage=1 00:27:38.231 --rc genhtml_function_coverage=1 00:27:38.231 --rc genhtml_legend=1 00:27:38.231 --rc geninfo_all_blocks=1 00:27:38.231 --rc geninfo_unexecuted_blocks=1 00:27:38.231 00:27:38.231 ' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:38.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.231 --rc genhtml_branch_coverage=1 00:27:38.231 --rc genhtml_function_coverage=1 00:27:38.231 --rc genhtml_legend=1 00:27:38.231 --rc geninfo_all_blocks=1 00:27:38.231 --rc geninfo_unexecuted_blocks=1 00:27:38.231 00:27:38.231 ' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.231 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.232 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.232 
16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.232 16:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.501 16:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.501 16:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:43.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:43.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.501 16:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:43.501 Found net devices under 0000:86:00.0: cvl_0_0 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.501 16:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:43.501 Found net devices under 0000:86:00.1: cvl_0_1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.501 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:27:43.502 00:27:43.502 --- 10.0.0.2 ping statistics --- 00:27:43.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.502 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:43.502 00:27:43.502 --- 10.0.0.1 ping statistics --- 00:27:43.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.502 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2999068 
00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2999068 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2999068 ']' 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.502 16:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:43.502 [2024-11-04 16:39:09.954125] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:43.502 [2024-11-04 16:39:09.955056] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:27:43.502 [2024-11-04 16:39:09.955089] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.502 [2024-11-04 16:39:10.024704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:43.502 [2024-11-04 16:39:10.074294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.502 [2024-11-04 16:39:10.074328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.502 [2024-11-04 16:39:10.074336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.502 [2024-11-04 16:39:10.074343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.502 [2024-11-04 16:39:10.074348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.502 [2024-11-04 16:39:10.075582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.502 [2024-11-04 16:39:10.075610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.502 [2024-11-04 16:39:10.075610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.502 [2024-11-04 16:39:10.143214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:43.502 [2024-11-04 16:39:10.143215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:43.502 [2024-11-04 16:39:10.143318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:43.502 [2024-11-04 16:39:10.143438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.502 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:43.760 [2024-11-04 16:39:10.372167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.760 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:44.019 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:44.019 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:44.019 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:44.019 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:44.277 16:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:44.535 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=734f0ec7-1465-4d4e-af2d-8ffb3e7b80e7 00:27:44.535 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 734f0ec7-1465-4d4e-af2d-8ffb3e7b80e7 lvol 20 00:27:44.793 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4f4716d6-e02f-4395-a654-bd4e027945dd 00:27:44.793 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:44.793 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f4716d6-e02f-4395-a654-bd4e027945dd 00:27:45.053 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.363 [2024-11-04 16:39:11.928304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.363 16:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:45.363 
16:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:45.363 16:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2999539 00:27:45.363 16:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:46.358 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4f4716d6-e02f-4395-a654-bd4e027945dd MY_SNAPSHOT 00:27:46.616 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=92739079-dc20-4d27-9165-ce5b6ba4c616 00:27:46.616 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4f4716d6-e02f-4395-a654-bd4e027945dd 30 00:27:46.873 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 92739079-dc20-4d27-9165-ce5b6ba4c616 MY_CLONE 00:27:47.133 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8230b37f-7249-4c9d-84d4-d8a0c5dc2b79 00:27:47.133 16:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8230b37f-7249-4c9d-84d4-d8a0c5dc2b79 00:27:47.701 16:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2999539 00:27:55.815 Initializing NVMe Controllers 00:27:55.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:55.815 
Controller IO queue size 128, less than required. 00:27:55.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:55.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:55.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:55.815 Initialization complete. Launching workers. 00:27:55.815 ======================================================== 00:27:55.815 Latency(us) 00:27:55.815 Device Information : IOPS MiB/s Average min max 00:27:55.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12432.10 48.56 10300.08 2106.77 52806.64 00:27:55.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12369.70 48.32 10347.61 3919.14 61250.77 00:27:55.815 ======================================================== 00:27:55.815 Total : 24801.80 96.88 10323.78 2106.77 61250.77 00:27:55.815 00:27:55.815 16:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.815 16:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f4716d6-e02f-4395-a654-bd4e027945dd 00:27:56.074 16:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 734f0ec7-1465-4d4e-af2d-8ffb3e7b80e7 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.332 rmmod nvme_tcp 00:27:56.332 rmmod nvme_fabrics 00:27:56.332 rmmod nvme_keyring 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2999068 ']' 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2999068 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2999068 ']' 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2999068 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.332 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2999068 00:27:56.333 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:56.333 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:56.333 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999068' 00:27:56.333 killing process with pid 2999068 00:27:56.333 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2999068 00:27:56.333 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2999068 00:27:56.591 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.592 16:39:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.592 16:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.125 00:27:59.125 real 0m20.785s 00:27:59.125 user 0m54.965s 00:27:59.125 sys 0m9.010s 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:59.125 ************************************ 00:27:59.125 END TEST nvmf_lvol 00:27:59.125 ************************************ 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:59.125 ************************************ 00:27:59.125 START TEST nvmf_lvs_grow 00:27:59.125 ************************************ 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:59.125 * Looking for test storage... 
00:27:59.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.125 16:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.125 16:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.125 --rc genhtml_branch_coverage=1 00:27:59.125 --rc genhtml_function_coverage=1 00:27:59.125 --rc genhtml_legend=1 00:27:59.125 --rc geninfo_all_blocks=1 00:27:59.125 --rc geninfo_unexecuted_blocks=1 00:27:59.125 00:27:59.125 ' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.125 --rc genhtml_branch_coverage=1 00:27:59.125 --rc genhtml_function_coverage=1 00:27:59.125 --rc genhtml_legend=1 00:27:59.125 --rc geninfo_all_blocks=1 00:27:59.125 --rc geninfo_unexecuted_blocks=1 00:27:59.125 00:27:59.125 ' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.125 --rc genhtml_branch_coverage=1 00:27:59.125 --rc genhtml_function_coverage=1 00:27:59.125 --rc genhtml_legend=1 00:27:59.125 --rc geninfo_all_blocks=1 00:27:59.125 --rc geninfo_unexecuted_blocks=1 00:27:59.125 00:27:59.125 ' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.125 --rc genhtml_branch_coverage=1 00:27:59.125 --rc genhtml_function_coverage=1 00:27:59.125 --rc genhtml_legend=1 00:27:59.125 --rc geninfo_all_blocks=1 00:27:59.125 --rc 
geninfo_unexecuted_blocks=1 00:27:59.125 00:27:59.125 ' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:59.125 16:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.125 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.125 16:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.126 16:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.126 16:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:04.393 
16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.393 16:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.393 16:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:04.393 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:04.393 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.393 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:04.394 Found net devices under 0000:86:00.0: cvl_0_0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.394 16:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:04.394 Found net devices under 0000:86:00.1: cvl_0_1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.394 
16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:04.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:28:04.394 00:28:04.394 --- 10.0.0.2 ping statistics --- 00:28:04.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.394 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:28:04.394 00:28:04.394 --- 10.0.0.1 ping statistics --- 00:28:04.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.394 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:04.394 16:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3004694 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3004694 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3004694 ']' 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:04.394 [2024-11-04 16:39:30.800236] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:04.394 [2024-11-04 16:39:30.801203] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:28:04.394 [2024-11-04 16:39:30.801239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.394 [2024-11-04 16:39:30.868620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.394 [2024-11-04 16:39:30.907790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.394 [2024-11-04 16:39:30.907826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.394 [2024-11-04 16:39:30.907834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.394 [2024-11-04 16:39:30.907840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.394 [2024-11-04 16:39:30.907844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.394 [2024-11-04 16:39:30.908386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.394 [2024-11-04 16:39:30.973968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:04.394 [2024-11-04 16:39:30.974181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
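Earlier in this trace, `scripts/common.sh` evaluates `lt 1.15 2` (via `cmp_versions 1.15 '<' 2`) by splitting both version strings on `.`, `-` and `:` and comparing the fields numerically. A minimal stand-alone sketch of that helper (not SPDK's exact code; it assumes purely numeric fields, which the real script guards with a `^[0-9]+$` check):

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic traced above: split two dotted
# version strings and compare field by field, numerically.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing trailing fields compare as 0, so "1.15" vs "2"
        # becomes (1,15) vs (2,0).
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is what makes `lt 1.9 1.10` true, where a plain lexicographic string compare would get it wrong.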
00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:04.394 16:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:04.394 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.394 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:04.394 [2024-11-04 16:39:31.204871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.653 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:04.653 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.653 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:04.654 ************************************ 00:28:04.654 START TEST lvs_grow_clean 00:28:04.654 ************************************ 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:28:04.654 16:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:04.654 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:04.912 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:04.912 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:04.912 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:05.170 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:05.170 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:05.170 16:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1073de12-e54b-4d74-bb3e-08da53fedf9e lvol 150 00:28:05.429 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=be9c9ea9-fdf3-4d61-b6db-156888ca1de8 00:28:05.429 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:05.429 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:05.429 [2024-11-04 16:39:32.220753] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:05.429 [2024-11-04 16:39:32.220835] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:05.429 true 00:28:05.429 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:05.429 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:05.688 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:05.688 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:05.946 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be9c9ea9-fdf3-4d61-b6db-156888ca1de8 00:28:06.205 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.205 [2024-11-04 16:39:32.977076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.205 16:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3004998 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3004998 /var/tmp/bdevperf.sock 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3004998 ']' 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:06.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.464 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.464 [2024-11-04 16:39:33.231919] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:06.464 [2024-11-04 16:39:33.231966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004998 ] 00:28:06.722 [2024-11-04 16:39:33.295043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.722 [2024-11-04 16:39:33.336961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.722 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.722 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.722 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:06.980 Nvme0n1 00:28:06.980 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:07.239 [ 00:28:07.239 { 00:28:07.239 "name": "Nvme0n1", 00:28:07.239 "aliases": [ 00:28:07.239 "be9c9ea9-fdf3-4d61-b6db-156888ca1de8" 00:28:07.239 ], 00:28:07.239 "product_name": "NVMe disk", 00:28:07.239 
"block_size": 4096, 00:28:07.239 "num_blocks": 38912, 00:28:07.239 "uuid": "be9c9ea9-fdf3-4d61-b6db-156888ca1de8", 00:28:07.239 "numa_id": 1, 00:28:07.239 "assigned_rate_limits": { 00:28:07.239 "rw_ios_per_sec": 0, 00:28:07.239 "rw_mbytes_per_sec": 0, 00:28:07.239 "r_mbytes_per_sec": 0, 00:28:07.239 "w_mbytes_per_sec": 0 00:28:07.239 }, 00:28:07.239 "claimed": false, 00:28:07.239 "zoned": false, 00:28:07.239 "supported_io_types": { 00:28:07.239 "read": true, 00:28:07.239 "write": true, 00:28:07.239 "unmap": true, 00:28:07.239 "flush": true, 00:28:07.239 "reset": true, 00:28:07.239 "nvme_admin": true, 00:28:07.239 "nvme_io": true, 00:28:07.239 "nvme_io_md": false, 00:28:07.239 "write_zeroes": true, 00:28:07.239 "zcopy": false, 00:28:07.239 "get_zone_info": false, 00:28:07.239 "zone_management": false, 00:28:07.239 "zone_append": false, 00:28:07.239 "compare": true, 00:28:07.239 "compare_and_write": true, 00:28:07.239 "abort": true, 00:28:07.239 "seek_hole": false, 00:28:07.239 "seek_data": false, 00:28:07.239 "copy": true, 00:28:07.239 "nvme_iov_md": false 00:28:07.239 }, 00:28:07.239 "memory_domains": [ 00:28:07.239 { 00:28:07.239 "dma_device_id": "system", 00:28:07.239 "dma_device_type": 1 00:28:07.239 } 00:28:07.239 ], 00:28:07.239 "driver_specific": { 00:28:07.239 "nvme": [ 00:28:07.239 { 00:28:07.239 "trid": { 00:28:07.239 "trtype": "TCP", 00:28:07.239 "adrfam": "IPv4", 00:28:07.239 "traddr": "10.0.0.2", 00:28:07.239 "trsvcid": "4420", 00:28:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:07.239 }, 00:28:07.239 "ctrlr_data": { 00:28:07.239 "cntlid": 1, 00:28:07.239 "vendor_id": "0x8086", 00:28:07.239 "model_number": "SPDK bdev Controller", 00:28:07.239 "serial_number": "SPDK0", 00:28:07.239 "firmware_revision": "25.01", 00:28:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.239 "oacs": { 00:28:07.239 "security": 0, 00:28:07.239 "format": 0, 00:28:07.239 "firmware": 0, 00:28:07.239 "ns_manage": 0 00:28:07.239 }, 00:28:07.239 "multi_ctrlr": true, 
00:28:07.239 "ana_reporting": false 00:28:07.239 }, 00:28:07.239 "vs": { 00:28:07.239 "nvme_version": "1.3" 00:28:07.239 }, 00:28:07.239 "ns_data": { 00:28:07.239 "id": 1, 00:28:07.239 "can_share": true 00:28:07.239 } 00:28:07.239 } 00:28:07.239 ], 00:28:07.239 "mp_policy": "active_passive" 00:28:07.239 } 00:28:07.239 } 00:28:07.239 ] 00:28:07.239 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3005197 00:28:07.239 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:07.239 16:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:07.239 Running I/O for 10 seconds... 00:28:08.174 Latency(us) 00:28:08.174 [2024-11-04T15:39:34.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.174 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:28:08.174 [2024-11-04T15:39:34.998Z] =================================================================================================================== 00:28:08.174 [2024-11-04T15:39:34.998Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:28:08.174 00:28:09.109 16:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:09.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.368 Nvme0n1 : 2.00 23336.50 91.16 0.00 0.00 0.00 0.00 0.00 00:28:09.368 [2024-11-04T15:39:36.192Z] 
=================================================================================================================== 00:28:09.368 [2024-11-04T15:39:36.192Z] Total : 23336.50 91.16 0.00 0.00 0.00 0.00 0.00 00:28:09.368 00:28:09.368 true 00:28:09.368 16:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:09.368 16:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:09.626 16:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:09.626 16:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:09.626 16:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3005197 00:28:10.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.193 Nvme0n1 : 3.00 23384.33 91.35 0.00 0.00 0.00 0.00 0.00 00:28:10.193 [2024-11-04T15:39:37.017Z] =================================================================================================================== 00:28:10.193 [2024-11-04T15:39:37.017Z] Total : 23384.33 91.35 0.00 0.00 0.00 0.00 0.00 00:28:10.193 00:28:11.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.569 Nvme0n1 : 4.00 23475.50 91.70 0.00 0.00 0.00 0.00 0.00 00:28:11.569 [2024-11-04T15:39:38.393Z] =================================================================================================================== 00:28:11.569 [2024-11-04T15:39:38.393Z] Total : 23475.50 91.70 0.00 0.00 0.00 0.00 0.00 00:28:11.569 00:28:12.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:28:12.505 Nvme0n1 : 5.00 23530.20 91.91 0.00 0.00 0.00 0.00 0.00 00:28:12.505 [2024-11-04T15:39:39.329Z] =================================================================================================================== 00:28:12.505 [2024-11-04T15:39:39.329Z] Total : 23530.20 91.91 0.00 0.00 0.00 0.00 0.00 00:28:12.505 00:28:13.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:13.438 Nvme0n1 : 6.00 23566.67 92.06 0.00 0.00 0.00 0.00 0.00 00:28:13.438 [2024-11-04T15:39:40.262Z] =================================================================================================================== 00:28:13.438 [2024-11-04T15:39:40.262Z] Total : 23566.67 92.06 0.00 0.00 0.00 0.00 0.00 00:28:13.438 00:28:14.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:14.373 Nvme0n1 : 7.00 23502.00 91.80 0.00 0.00 0.00 0.00 0.00 00:28:14.373 [2024-11-04T15:39:41.197Z] =================================================================================================================== 00:28:14.373 [2024-11-04T15:39:41.197Z] Total : 23502.00 91.80 0.00 0.00 0.00 0.00 0.00 00:28:14.373 00:28:15.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:15.308 Nvme0n1 : 8.00 23532.88 91.93 0.00 0.00 0.00 0.00 0.00 00:28:15.308 [2024-11-04T15:39:42.132Z] =================================================================================================================== 00:28:15.308 [2024-11-04T15:39:42.132Z] Total : 23532.88 91.93 0.00 0.00 0.00 0.00 0.00 00:28:15.308 00:28:16.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:16.243 Nvme0n1 : 9.00 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:28:16.243 [2024-11-04T15:39:43.067Z] =================================================================================================================== 00:28:16.243 [2024-11-04T15:39:43.067Z] Total : 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:28:16.243 
00:28:17.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.179 Nvme0n1 : 10.00 23595.20 92.17 0.00 0.00 0.00 0.00 0.00 00:28:17.179 [2024-11-04T15:39:44.003Z] =================================================================================================================== 00:28:17.179 [2024-11-04T15:39:44.003Z] Total : 23595.20 92.17 0.00 0.00 0.00 0.00 0.00 00:28:17.179 00:28:17.179 00:28:17.179 Latency(us) 00:28:17.179 [2024-11-04T15:39:44.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.179 Nvme0n1 : 10.00 23596.25 92.17 0.00 0.00 5421.34 3089.55 16103.13 00:28:17.179 [2024-11-04T15:39:44.003Z] =================================================================================================================== 00:28:17.179 [2024-11-04T15:39:44.003Z] Total : 23596.25 92.17 0.00 0.00 5421.34 3089.55 16103.13 00:28:17.179 { 00:28:17.179 "results": [ 00:28:17.179 { 00:28:17.179 "job": "Nvme0n1", 00:28:17.179 "core_mask": "0x2", 00:28:17.179 "workload": "randwrite", 00:28:17.179 "status": "finished", 00:28:17.179 "queue_depth": 128, 00:28:17.179 "io_size": 4096, 00:28:17.179 "runtime": 10.002266, 00:28:17.179 "iops": 23596.25308905002, 00:28:17.179 "mibps": 92.17286362910164, 00:28:17.179 "io_failed": 0, 00:28:17.179 "io_timeout": 0, 00:28:17.179 "avg_latency_us": 5421.343943493743, 00:28:17.179 "min_latency_us": 3089.554285714286, 00:28:17.179 "max_latency_us": 16103.131428571429 00:28:17.179 } 00:28:17.179 ], 00:28:17.179 "core_count": 1 00:28:17.179 } 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3004998 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3004998 ']' 00:28:17.438 16:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3004998 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004998 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004998' 00:28:17.438 killing process with pid 3004998 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3004998 00:28:17.438 Received shutdown signal, test time was about 10.000000 seconds 00:28:17.438 00:28:17.438 Latency(us) 00:28:17.438 [2024-11-04T15:39:44.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.438 [2024-11-04T15:39:44.262Z] =================================================================================================================== 00:28:17.438 [2024-11-04T15:39:44.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3004998 00:28:17.438 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.696 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.954 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:17.954 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:18.213 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:18.213 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:18.213 16:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:18.213 [2024-11-04 16:39:44.984886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:18.213 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:18.471 request: 00:28:18.471 { 00:28:18.471 "uuid": "1073de12-e54b-4d74-bb3e-08da53fedf9e", 00:28:18.471 "method": 
"bdev_lvol_get_lvstores", 00:28:18.471 "req_id": 1 00:28:18.471 } 00:28:18.471 Got JSON-RPC error response 00:28:18.471 response: 00:28:18.471 { 00:28:18.471 "code": -19, 00:28:18.471 "message": "No such device" 00:28:18.472 } 00:28:18.472 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:18.472 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.472 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.472 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.472 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:18.731 aio_bdev 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev be9c9ea9-fdf3-4d61-b6db-156888ca1de8 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=be9c9ea9-fdf3-4d61-b6db-156888ca1de8 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:18.731 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:18.990 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b be9c9ea9-fdf3-4d61-b6db-156888ca1de8 -t 2000 00:28:18.990 [ 00:28:18.990 { 00:28:18.990 "name": "be9c9ea9-fdf3-4d61-b6db-156888ca1de8", 00:28:18.990 "aliases": [ 00:28:18.990 "lvs/lvol" 00:28:18.990 ], 00:28:18.990 "product_name": "Logical Volume", 00:28:18.990 "block_size": 4096, 00:28:18.990 "num_blocks": 38912, 00:28:18.990 "uuid": "be9c9ea9-fdf3-4d61-b6db-156888ca1de8", 00:28:18.990 "assigned_rate_limits": { 00:28:18.990 "rw_ios_per_sec": 0, 00:28:18.990 "rw_mbytes_per_sec": 0, 00:28:18.990 "r_mbytes_per_sec": 0, 00:28:18.990 "w_mbytes_per_sec": 0 00:28:18.990 }, 00:28:18.990 "claimed": false, 00:28:18.990 "zoned": false, 00:28:18.990 "supported_io_types": { 00:28:18.990 "read": true, 00:28:18.990 "write": true, 00:28:18.990 "unmap": true, 00:28:18.990 "flush": false, 00:28:18.990 "reset": true, 00:28:18.990 "nvme_admin": false, 00:28:18.990 "nvme_io": false, 00:28:18.990 "nvme_io_md": false, 00:28:18.990 "write_zeroes": true, 00:28:18.990 "zcopy": false, 00:28:18.990 "get_zone_info": false, 00:28:18.990 "zone_management": false, 00:28:18.990 "zone_append": false, 00:28:18.990 "compare": false, 00:28:18.990 "compare_and_write": false, 00:28:18.990 "abort": false, 00:28:18.990 "seek_hole": true, 00:28:18.990 "seek_data": true, 00:28:18.990 "copy": false, 00:28:18.990 "nvme_iov_md": false 00:28:18.990 }, 00:28:18.990 "driver_specific": { 00:28:18.990 "lvol": { 00:28:18.990 "lvol_store_uuid": "1073de12-e54b-4d74-bb3e-08da53fedf9e", 00:28:18.990 "base_bdev": "aio_bdev", 00:28:18.990 
"thin_provision": false, 00:28:18.990 "num_allocated_clusters": 38, 00:28:18.990 "snapshot": false, 00:28:18.990 "clone": false, 00:28:18.990 "esnap_clone": false 00:28:18.990 } 00:28:18.990 } 00:28:18.990 } 00:28:18.990 ] 00:28:18.990 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:19.248 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:19.248 16:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:19.248 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:19.248 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 00:28:19.248 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:19.507 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:19.507 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be9c9ea9-fdf3-4d61-b6db-156888ca1de8 00:28:19.765 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1073de12-e54b-4d74-bb3e-08da53fedf9e 
00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:20.024 00:28:20.024 real 0m15.555s 00:28:20.024 user 0m15.082s 00:28:20.024 sys 0m1.451s 00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.024 ************************************ 00:28:20.024 END TEST lvs_grow_clean 00:28:20.024 ************************************ 00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:20.024 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:20.283 ************************************ 00:28:20.283 START TEST lvs_grow_dirty 00:28:20.283 ************************************ 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:20.283 16:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:20.283 16:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:20.542 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:20.542 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:20.542 16:39:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=37edac6f-bb78-4248-976a-534c496926dc 00:28:20.542 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:20.542 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:20.800 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:20.800 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:20.800 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37edac6f-bb78-4248-976a-534c496926dc lvol 150 00:28:21.058 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:21.058 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:21.058 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:21.058 [2024-11-04 16:39:47.860815] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:21.058 [2024-11-04 
16:39:47.860949] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:21.058 true 00:28:21.058 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:21.058 16:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:21.346 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:21.346 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:21.657 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:21.657 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.914 [2024-11-04 16:39:48.605257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.915 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3007561 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3007561 /var/tmp/bdevperf.sock 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3007561 ']' 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.173 16:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:22.173 [2024-11-04 16:39:48.858058] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:28:22.173 [2024-11-04 16:39:48.858108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007561 ] 00:28:22.173 [2024-11-04 16:39:48.921378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.173 [2024-11-04 16:39:48.963279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.431 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.431 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:22.431 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:22.688 Nvme0n1 00:28:22.688 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:22.951 [ 00:28:22.951 { 00:28:22.951 "name": "Nvme0n1", 00:28:22.951 "aliases": [ 00:28:22.951 "26c1415e-8ce5-4e9f-a60b-87ecef2c0997" 00:28:22.951 ], 00:28:22.951 "product_name": "NVMe disk", 00:28:22.951 "block_size": 4096, 00:28:22.951 "num_blocks": 38912, 00:28:22.952 "uuid": "26c1415e-8ce5-4e9f-a60b-87ecef2c0997", 00:28:22.952 "numa_id": 1, 00:28:22.952 "assigned_rate_limits": { 00:28:22.952 "rw_ios_per_sec": 0, 00:28:22.952 "rw_mbytes_per_sec": 0, 00:28:22.952 "r_mbytes_per_sec": 0, 00:28:22.952 "w_mbytes_per_sec": 0 00:28:22.952 }, 00:28:22.952 "claimed": false, 00:28:22.952 "zoned": false, 
00:28:22.952 "supported_io_types": { 00:28:22.952 "read": true, 00:28:22.952 "write": true, 00:28:22.952 "unmap": true, 00:28:22.952 "flush": true, 00:28:22.952 "reset": true, 00:28:22.952 "nvme_admin": true, 00:28:22.952 "nvme_io": true, 00:28:22.952 "nvme_io_md": false, 00:28:22.952 "write_zeroes": true, 00:28:22.952 "zcopy": false, 00:28:22.952 "get_zone_info": false, 00:28:22.952 "zone_management": false, 00:28:22.952 "zone_append": false, 00:28:22.952 "compare": true, 00:28:22.952 "compare_and_write": true, 00:28:22.952 "abort": true, 00:28:22.952 "seek_hole": false, 00:28:22.952 "seek_data": false, 00:28:22.952 "copy": true, 00:28:22.952 "nvme_iov_md": false 00:28:22.952 }, 00:28:22.952 "memory_domains": [ 00:28:22.952 { 00:28:22.952 "dma_device_id": "system", 00:28:22.952 "dma_device_type": 1 00:28:22.952 } 00:28:22.952 ], 00:28:22.952 "driver_specific": { 00:28:22.952 "nvme": [ 00:28:22.952 { 00:28:22.952 "trid": { 00:28:22.952 "trtype": "TCP", 00:28:22.952 "adrfam": "IPv4", 00:28:22.952 "traddr": "10.0.0.2", 00:28:22.952 "trsvcid": "4420", 00:28:22.952 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:22.952 }, 00:28:22.952 "ctrlr_data": { 00:28:22.952 "cntlid": 1, 00:28:22.952 "vendor_id": "0x8086", 00:28:22.952 "model_number": "SPDK bdev Controller", 00:28:22.952 "serial_number": "SPDK0", 00:28:22.952 "firmware_revision": "25.01", 00:28:22.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.953 "oacs": { 00:28:22.953 "security": 0, 00:28:22.953 "format": 0, 00:28:22.953 "firmware": 0, 00:28:22.953 "ns_manage": 0 00:28:22.953 }, 00:28:22.953 "multi_ctrlr": true, 00:28:22.953 "ana_reporting": false 00:28:22.953 }, 00:28:22.953 "vs": { 00:28:22.953 "nvme_version": "1.3" 00:28:22.953 }, 00:28:22.953 "ns_data": { 00:28:22.953 "id": 1, 00:28:22.953 "can_share": true 00:28:22.953 } 00:28:22.953 } 00:28:22.953 ], 00:28:22.953 "mp_policy": "active_passive" 00:28:22.953 } 00:28:22.953 } 00:28:22.953 ] 00:28:22.953 16:39:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3007789 00:28:22.953 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:22.953 16:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:23.216 Running I/O for 10 seconds... 00:28:24.152 Latency(us) 00:28:24.152 [2024-11-04T15:39:50.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.152 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:28:24.152 [2024-11-04T15:39:50.976Z] =================================================================================================================== 00:28:24.152 [2024-11-04T15:39:50.976Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:28:24.152 00:28:25.085 16:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:25.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.085 Nvme0n1 : 2.00 23273.00 90.91 0.00 0.00 0.00 0.00 0.00 00:28:25.085 [2024-11-04T15:39:51.909Z] =================================================================================================================== 00:28:25.085 [2024-11-04T15:39:51.909Z] Total : 23273.00 90.91 0.00 0.00 0.00 0.00 0.00 00:28:25.085 00:28:25.085 true 00:28:25.085 16:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:25.085 16:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:25.343 16:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:25.343 16:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:25.343 16:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3007789 00:28:26.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.278 Nvme0n1 : 3.00 23347.00 91.20 0.00 0.00 0.00 0.00 0.00 00:28:26.278 [2024-11-04T15:39:53.102Z] =================================================================================================================== 00:28:26.278 [2024-11-04T15:39:53.102Z] Total : 23347.00 91.20 0.00 0.00 0.00 0.00 0.00 00:28:26.278 00:28:27.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.212 Nvme0n1 : 4.00 23447.50 91.59 0.00 0.00 0.00 0.00 0.00 00:28:27.212 [2024-11-04T15:39:54.036Z] =================================================================================================================== 00:28:27.212 [2024-11-04T15:39:54.036Z] Total : 23447.50 91.59 0.00 0.00 0.00 0.00 0.00 00:28:27.212 00:28:28.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.147 Nvme0n1 : 5.00 23507.80 91.83 0.00 0.00 0.00 0.00 0.00 00:28:28.147 [2024-11-04T15:39:54.971Z] =================================================================================================================== 00:28:28.148 [2024-11-04T15:39:54.972Z] Total : 23507.80 91.83 0.00 0.00 0.00 0.00 0.00 00:28:28.148 00:28:29.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:29.083 Nvme0n1 : 6.00 23484.50 91.74 0.00 0.00 0.00 0.00 0.00 00:28:29.083 [2024-11-04T15:39:55.907Z] =================================================================================================================== 00:28:29.083 [2024-11-04T15:39:55.907Z] Total : 23484.50 91.74 0.00 0.00 0.00 0.00 0.00 00:28:29.083 00:28:30.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.019 Nvme0n1 : 7.00 23522.29 91.88 0.00 0.00 0.00 0.00 0.00 00:28:30.019 [2024-11-04T15:39:56.843Z] =================================================================================================================== 00:28:30.019 [2024-11-04T15:39:56.843Z] Total : 23522.29 91.88 0.00 0.00 0.00 0.00 0.00 00:28:30.019 00:28:31.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:31.394 Nvme0n1 : 8.00 23550.62 91.99 0.00 0.00 0.00 0.00 0.00 00:28:31.394 [2024-11-04T15:39:58.218Z] =================================================================================================================== 00:28:31.394 [2024-11-04T15:39:58.218Z] Total : 23550.62 91.99 0.00 0.00 0.00 0.00 0.00 00:28:31.394 00:28:32.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.330 Nvme0n1 : 9.00 23574.56 92.09 0.00 0.00 0.00 0.00 0.00 00:28:32.330 [2024-11-04T15:39:59.154Z] =================================================================================================================== 00:28:32.330 [2024-11-04T15:39:59.154Z] Total : 23574.56 92.09 0.00 0.00 0.00 0.00 0.00 00:28:32.330 00:28:33.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.266 Nvme0n1 : 10.00 23592.00 92.16 0.00 0.00 0.00 0.00 0.00 00:28:33.266 [2024-11-04T15:40:00.090Z] =================================================================================================================== 00:28:33.266 [2024-11-04T15:40:00.090Z] Total : 23592.00 92.16 0.00 0.00 0.00 0.00 0.00 00:28:33.266 00:28:33.266 
00:28:33.266 Latency(us) 00:28:33.266 [2024-11-04T15:40:00.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.266 Nvme0n1 : 10.00 23596.35 92.17 0.00 0.00 5421.72 3183.18 15978.30 00:28:33.266 [2024-11-04T15:40:00.090Z] =================================================================================================================== 00:28:33.266 [2024-11-04T15:40:00.090Z] Total : 23596.35 92.17 0.00 0.00 5421.72 3183.18 15978.30 00:28:33.266 { 00:28:33.266 "results": [ 00:28:33.266 { 00:28:33.266 "job": "Nvme0n1", 00:28:33.266 "core_mask": "0x2", 00:28:33.266 "workload": "randwrite", 00:28:33.266 "status": "finished", 00:28:33.266 "queue_depth": 128, 00:28:33.266 "io_size": 4096, 00:28:33.266 "runtime": 10.003581, 00:28:33.266 "iops": 23596.350147012356, 00:28:33.266 "mibps": 92.17324276176701, 00:28:33.266 "io_failed": 0, 00:28:33.266 "io_timeout": 0, 00:28:33.266 "avg_latency_us": 5421.715219438823, 00:28:33.266 "min_latency_us": 3183.177142857143, 00:28:33.266 "max_latency_us": 15978.300952380952 00:28:33.266 } 00:28:33.266 ], 00:28:33.266 "core_count": 1 00:28:33.266 } 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3007561 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3007561 ']' 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3007561 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.267 16:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007561 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007561' 00:28:33.267 killing process with pid 3007561 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3007561 00:28:33.267 Received shutdown signal, test time was about 10.000000 seconds 00:28:33.267 00:28:33.267 Latency(us) 00:28:33.267 [2024-11-04T15:40:00.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.267 [2024-11-04T15:40:00.091Z] =================================================================================================================== 00:28:33.267 [2024-11-04T15:40:00.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.267 16:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3007561 00:28:33.267 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.525 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.784 16:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:33.784 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3004694 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3004694 00:28:34.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3004694 Killed "${NVMF_APP[@]}" "$@" 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3009578 00:28:34.043 16:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3009578 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3009578 ']' 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.043 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:34.043 [2024-11-04 16:40:00.752316] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:34.043 [2024-11-04 16:40:00.753267] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:28:34.043 [2024-11-04 16:40:00.753310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.043 [2024-11-04 16:40:00.821389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.043 [2024-11-04 16:40:00.861880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.043 [2024-11-04 16:40:00.861915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.043 [2024-11-04 16:40:00.861922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.043 [2024-11-04 16:40:00.861928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.043 [2024-11-04 16:40:00.861933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.043 [2024-11-04 16:40:00.862502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.302 [2024-11-04 16:40:00.929654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:34.302 [2024-11-04 16:40:00.929876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.302 16:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:34.561 [2024-11-04 16:40:01.157865] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:34.561 [2024-11-04 16:40:01.157965] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:34.561 [2024-11-04 16:40:01.158007] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:34.561 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 -t 2000 00:28:34.819 [ 00:28:34.819 { 00:28:34.819 "name": "26c1415e-8ce5-4e9f-a60b-87ecef2c0997", 00:28:34.819 "aliases": [ 00:28:34.819 "lvs/lvol" 00:28:34.819 ], 00:28:34.819 "product_name": "Logical Volume", 00:28:34.819 "block_size": 4096, 00:28:34.819 "num_blocks": 38912, 00:28:34.819 "uuid": "26c1415e-8ce5-4e9f-a60b-87ecef2c0997", 00:28:34.819 "assigned_rate_limits": { 00:28:34.819 "rw_ios_per_sec": 0, 00:28:34.819 "rw_mbytes_per_sec": 0, 00:28:34.819 "r_mbytes_per_sec": 0, 00:28:34.819 "w_mbytes_per_sec": 0 00:28:34.819 }, 00:28:34.819 "claimed": false, 00:28:34.819 "zoned": false, 00:28:34.819 "supported_io_types": { 00:28:34.819 "read": true, 00:28:34.819 "write": true, 00:28:34.819 "unmap": true, 00:28:34.819 "flush": false, 00:28:34.819 "reset": true, 00:28:34.819 "nvme_admin": false, 00:28:34.819 "nvme_io": false, 00:28:34.819 "nvme_io_md": false, 00:28:34.819 "write_zeroes": true, 
00:28:34.819 "zcopy": false, 00:28:34.819 "get_zone_info": false, 00:28:34.819 "zone_management": false, 00:28:34.819 "zone_append": false, 00:28:34.819 "compare": false, 00:28:34.819 "compare_and_write": false, 00:28:34.819 "abort": false, 00:28:34.819 "seek_hole": true, 00:28:34.819 "seek_data": true, 00:28:34.819 "copy": false, 00:28:34.819 "nvme_iov_md": false 00:28:34.819 }, 00:28:34.819 "driver_specific": { 00:28:34.819 "lvol": { 00:28:34.819 "lvol_store_uuid": "37edac6f-bb78-4248-976a-534c496926dc", 00:28:34.819 "base_bdev": "aio_bdev", 00:28:34.819 "thin_provision": false, 00:28:34.819 "num_allocated_clusters": 38, 00:28:34.819 "snapshot": false, 00:28:34.819 "clone": false, 00:28:34.819 "esnap_clone": false 00:28:34.819 } 00:28:34.819 } 00:28:34.819 } 00:28:34.819 ] 00:28:34.819 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:34.819 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:34.819 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:35.078 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:35.078 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:35.078 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:35.335 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:35.335 16:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:35.335 [2024-11-04 16:40:02.102874] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:35.335 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:35.593 request: 00:28:35.593 { 00:28:35.593 "uuid": "37edac6f-bb78-4248-976a-534c496926dc", 00:28:35.593 "method": "bdev_lvol_get_lvstores", 00:28:35.593 "req_id": 1 00:28:35.593 } 00:28:35.593 Got JSON-RPC error response 00:28:35.593 response: 00:28:35.593 { 00:28:35.593 "code": -19, 00:28:35.593 "message": "No such device" 00:28:35.593 } 00:28:35.593 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:35.593 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.593 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.593 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.593 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:35.852 aio_bdev 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:35.852 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:36.109 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 -t 2000 00:28:36.109 [ 00:28:36.109 { 00:28:36.110 "name": "26c1415e-8ce5-4e9f-a60b-87ecef2c0997", 00:28:36.110 "aliases": [ 00:28:36.110 "lvs/lvol" 00:28:36.110 ], 00:28:36.110 "product_name": "Logical Volume", 00:28:36.110 "block_size": 4096, 00:28:36.110 "num_blocks": 38912, 00:28:36.110 "uuid": "26c1415e-8ce5-4e9f-a60b-87ecef2c0997", 00:28:36.110 "assigned_rate_limits": { 00:28:36.110 "rw_ios_per_sec": 0, 00:28:36.110 "rw_mbytes_per_sec": 0, 00:28:36.110 
"r_mbytes_per_sec": 0, 00:28:36.110 "w_mbytes_per_sec": 0 00:28:36.110 }, 00:28:36.110 "claimed": false, 00:28:36.110 "zoned": false, 00:28:36.110 "supported_io_types": { 00:28:36.110 "read": true, 00:28:36.110 "write": true, 00:28:36.110 "unmap": true, 00:28:36.110 "flush": false, 00:28:36.110 "reset": true, 00:28:36.110 "nvme_admin": false, 00:28:36.110 "nvme_io": false, 00:28:36.110 "nvme_io_md": false, 00:28:36.110 "write_zeroes": true, 00:28:36.110 "zcopy": false, 00:28:36.110 "get_zone_info": false, 00:28:36.110 "zone_management": false, 00:28:36.110 "zone_append": false, 00:28:36.110 "compare": false, 00:28:36.110 "compare_and_write": false, 00:28:36.110 "abort": false, 00:28:36.110 "seek_hole": true, 00:28:36.110 "seek_data": true, 00:28:36.110 "copy": false, 00:28:36.110 "nvme_iov_md": false 00:28:36.110 }, 00:28:36.110 "driver_specific": { 00:28:36.110 "lvol": { 00:28:36.110 "lvol_store_uuid": "37edac6f-bb78-4248-976a-534c496926dc", 00:28:36.110 "base_bdev": "aio_bdev", 00:28:36.110 "thin_provision": false, 00:28:36.110 "num_allocated_clusters": 38, 00:28:36.110 "snapshot": false, 00:28:36.110 "clone": false, 00:28:36.110 "esnap_clone": false 00:28:36.110 } 00:28:36.110 } 00:28:36.110 } 00:28:36.110 ] 00:28:36.110 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:36.110 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:36.110 16:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:36.368 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:36.368 16:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:36.368 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:36.625 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:36.625 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26c1415e-8ce5-4e9f-a60b-87ecef2c0997 00:28:36.883 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37edac6f-bb78-4248-976a-534c496926dc 00:28:37.141 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:37.141 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:37.141 00:28:37.141 real 0m17.058s 00:28:37.141 user 0m34.406s 00:28:37.141 sys 0m3.801s 00:28:37.141 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.141 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:37.141 ************************************ 00:28:37.141 END TEST lvs_grow_dirty 00:28:37.141 ************************************ 
00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:37.399 16:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:37.399 nvmf_trace.0 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.399 16:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.399 rmmod nvme_tcp 00:28:37.399 rmmod nvme_fabrics 00:28:37.399 rmmod nvme_keyring 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3009578 ']' 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3009578 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3009578 ']' 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3009578 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009578 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.399 
16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009578' 00:28:37.399 killing process with pid 3009578 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3009578 00:28:37.399 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3009578 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.657 16:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.558 
16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.558 00:28:39.558 real 0m40.909s 00:28:39.558 user 0m51.548s 00:28:39.558 sys 0m9.486s 00:28:39.558 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.558 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:39.558 ************************************ 00:28:39.558 END TEST nvmf_lvs_grow 00:28:39.558 ************************************ 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.816 ************************************ 00:28:39.816 START TEST nvmf_bdev_io_wait 00:28:39.816 ************************************ 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:39.816 * Looking for test storage... 
00:28:39.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:39.816 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.817 --rc genhtml_branch_coverage=1 00:28:39.817 --rc genhtml_function_coverage=1 00:28:39.817 --rc genhtml_legend=1 00:28:39.817 --rc geninfo_all_blocks=1 00:28:39.817 --rc geninfo_unexecuted_blocks=1 00:28:39.817 00:28:39.817 ' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.817 --rc genhtml_branch_coverage=1 00:28:39.817 --rc genhtml_function_coverage=1 00:28:39.817 --rc genhtml_legend=1 00:28:39.817 --rc geninfo_all_blocks=1 00:28:39.817 --rc geninfo_unexecuted_blocks=1 00:28:39.817 00:28:39.817 ' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.817 --rc genhtml_branch_coverage=1 00:28:39.817 --rc genhtml_function_coverage=1 00:28:39.817 --rc genhtml_legend=1 00:28:39.817 --rc geninfo_all_blocks=1 00:28:39.817 --rc geninfo_unexecuted_blocks=1 00:28:39.817 00:28:39.817 ' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.817 --rc genhtml_branch_coverage=1 00:28:39.817 --rc genhtml_function_coverage=1 
00:28:39.817 --rc genhtml_legend=1 00:28:39.817 --rc geninfo_all_blocks=1 00:28:39.817 --rc geninfo_unexecuted_blocks=1 00:28:39.817 00:28:39.817 ' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.817 16:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.817 16:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.817 16:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.817 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.075 16:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.075 16:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:45.337 16:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:45.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.337 16:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:45.337 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:45.337 Found net devices under 0000:86:00.0: cvl_0_0 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:45.337 Found net devices under 0000:86:00.1: cvl_0_1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.337 16:40:12 
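The discovery loop above walks `pci_devs`, matching each function's vendor:device ID against the e810/x722/mlx tables before collecting the net devices under it (here two Intel 0x159b ports bound to `ice`, exposed as cvl_0_0 and cvl_0_1). A minimal sketch of that classification step, listing only the IDs visible in this trace (`classify_nic` is a hypothetical helper name; the real table in nvmf/common.sh is longer):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the vendor:device classification performed by
# gather_supported_nvmf_pci_devs above. Only IDs visible in the log are listed.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx ;;    # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the two ports found in this run -> e810
classify_nic 0x15b3 0x1017   # -> mlx
```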
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.337 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:45.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:28:45.595 00:28:45.595 --- 10.0.0.2 ping statistics --- 00:28:45.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.595 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:28:45.595 00:28:45.595 --- 10.0.0.1 ping statistics --- 00:28:45.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.595 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.595 16:40:12 
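The `nvmf_tcp_init` sequence traced above splits the two ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens port 4420, and the cross-namespace pings confirm the path. It can be sketched as a dry run (the `run` wrapper only prints, since the real commands need root and this host's cvl_* devices):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing from nvmf_tcp_init above.
# `run` only echoes the command; on the test rig these execute for real.
run() { echo "+ $*"; }

TARGET_NS=cvl_0_0_ns_spdk
run ip netns add "$TARGET_NS"
run ip link set cvl_0_0 netns "$TARGET_NS"                    # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1             # target -> initiator
```

Because the target app is later launched through `ip netns exec cvl_0_0_ns_spdk`, its listener on 10.0.0.2:4420 is reachable from the root namespace only over the wire between the two physical ports.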
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3013663 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3013663 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3013663 ']' 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.595 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.595 [2024-11-04 16:40:12.333221] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.595 [2024-11-04 16:40:12.334185] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:45.595 [2024-11-04 16:40:12.334220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.595 [2024-11-04 16:40:12.401066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.854 [2024-11-04 16:40:12.445171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.854 [2024-11-04 16:40:12.445207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.854 [2024-11-04 16:40:12.445216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.854 [2024-11-04 16:40:12.445224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.854 [2024-11-04 16:40:12.445230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.854 [2024-11-04 16:40:12.446815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.854 [2024-11-04 16:40:12.446912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.854 [2024-11-04 16:40:12.447002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.854 [2024-11-04 16:40:12.447005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.854 [2024-11-04 16:40:12.447331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.854 16:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 [2024-11-04 16:40:12.574711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.854 [2024-11-04 16:40:12.575446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:45.854 [2024-11-04 16:40:12.575784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:45.854 [2024-11-04 16:40:12.575914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 [2024-11-04 16:40:12.583445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.854 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.854 Malloc0 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.855 16:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:45.855 [2024-11-04 16:40:12.635725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3013689 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3013691 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:45.855 16:40:12 
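With the target started under `--wait-for-rpc`, target/bdev_io_wait.sh drives it through the RPC sequence traced above: shrink the bdev-layer buffer pools (so I/O must wait for buffers, which is the point of the test), finish framework init, create the TCP transport, then build the Malloc0-backed subsystem and its listener. A compressed sketch, where `rpc_cmd` is a recording stub (in the real harness it wraps scripts/rpc.py against the target's RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the RPC bring-up sequence from target/bdev_io_wait.sh as traced
# above. This rpc_cmd stub only records the method name; the real one
# invokes scripts/rpc.py and talks to /var/tmp/spdk.sock.
declare -a rpc_log
rpc_cmd() { rpc_log+=("$1"); echo "rpc: $*"; }

rpc_cmd bdev_set_options -p 5 -c 1            # tiny iobuf pools to provoke IO_WAIT
rpc_cmd framework_start_init                  # leave --wait-for-rpc mode
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0  # 64 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```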
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.855 { 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme$subsystem", 00:28:45.855 "trtype": "$TEST_TRANSPORT", 00:28:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "$NVMF_PORT", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.855 "hdgst": ${hdgst:-false}, 00:28:45.855 "ddgst": ${ddgst:-false} 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 } 00:28:45.855 EOF 00:28:45.855 )") 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3013693 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.855 16:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3013696 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.855 { 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme$subsystem", 00:28:45.855 "trtype": "$TEST_TRANSPORT", 00:28:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "$NVMF_PORT", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.855 "hdgst": ${hdgst:-false}, 00:28:45.855 "ddgst": ${ddgst:-false} 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 } 00:28:45.855 EOF 00:28:45.855 )") 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.855 { 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme$subsystem", 00:28:45.855 "trtype": "$TEST_TRANSPORT", 00:28:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "$NVMF_PORT", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.855 "hdgst": ${hdgst:-false}, 00:28:45.855 "ddgst": ${ddgst:-false} 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 } 00:28:45.855 EOF 00:28:45.855 )") 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.855 { 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme$subsystem", 00:28:45.855 "trtype": "$TEST_TRANSPORT", 00:28:45.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "$NVMF_PORT", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.855 "hdgst": ${hdgst:-false}, 00:28:45.855 "ddgst": ${ddgst:-false} 00:28:45.855 }, 00:28:45.855 "method": 
"bdev_nvme_attach_controller" 00:28:45.855 } 00:28:45.855 EOF 00:28:45.855 )") 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3013689 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme1", 00:28:45.855 "trtype": "tcp", 00:28:45.855 "traddr": "10.0.0.2", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "4420", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.855 "hdgst": false, 00:28:45.855 "ddgst": false 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 }' 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme1", 00:28:45.855 "trtype": "tcp", 00:28:45.855 "traddr": "10.0.0.2", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "4420", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.855 "hdgst": false, 00:28:45.855 "ddgst": false 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 }' 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:45.855 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.855 "params": { 00:28:45.855 "name": "Nvme1", 00:28:45.855 "trtype": "tcp", 00:28:45.855 "traddr": "10.0.0.2", 00:28:45.855 "adrfam": "ipv4", 00:28:45.855 "trsvcid": "4420", 00:28:45.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.855 "hdgst": false, 00:28:45.855 "ddgst": false 00:28:45.855 }, 00:28:45.855 "method": "bdev_nvme_attach_controller" 00:28:45.855 }' 00:28:45.856 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:45.856 16:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.856 "params": { 00:28:45.856 "name": "Nvme1", 00:28:45.856 "trtype": "tcp", 00:28:45.856 "traddr": "10.0.0.2", 00:28:45.856 "adrfam": "ipv4", 00:28:45.856 "trsvcid": "4420", 00:28:45.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.856 "hdgst": false, 00:28:45.856 "ddgst": false 00:28:45.856 }, 00:28:45.856 "method": "bdev_nvme_attach_controller" 
00:28:45.856 }' 00:28:46.114 [2024-11-04 16:40:12.687411] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:46.114 [2024-11-04 16:40:12.687462] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:46.114 [2024-11-04 16:40:12.687479] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:46.114 [2024-11-04 16:40:12.687522] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:46.114 [2024-11-04 16:40:12.689465] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:46.114 [2024-11-04 16:40:12.689464] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:28:46.114 [2024-11-04 16:40:12.689514] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:46.114 [2024-11-04 16:40:12.689515] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:46.114 [2024-11-04 16:40:12.883381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.114 [2024-11-04 16:40:12.926309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:46.372 [2024-11-04 16:40:12.977093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.372 [2024-11-04 16:40:13.024200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.372 [2024-11-04 16:40:13.027163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.372 [2024-11-04 16:40:13.069548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:46.372 [2024-11-04 16:40:13.084961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.372 [2024-11-04 16:40:13.124675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:46.629 Running I/O for 1 seconds... 00:28:46.629 Running I/O for 1 seconds... 00:28:46.629 Running I/O for 1 seconds... 00:28:46.629 Running I/O for 1 seconds... 
00:28:47.559 14529.00 IOPS, 56.75 MiB/s 00:28:47.559 Latency(us) 00:28:47.559 [2024-11-04T15:40:14.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.559 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:47.559 Nvme1n1 : 1.01 14575.76 56.94 0.00 0.00 8755.72 3448.44 10236.10 00:28:47.559 [2024-11-04T15:40:14.383Z] =================================================================================================================== 00:28:47.559 [2024-11-04T15:40:14.383Z] Total : 14575.76 56.94 0.00 0.00 8755.72 3448.44 10236.10 00:28:47.559 6839.00 IOPS, 26.71 MiB/s 00:28:47.559 Latency(us) 00:28:47.559 [2024-11-04T15:40:14.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.560 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:47.560 Nvme1n1 : 1.01 6883.96 26.89 0.00 0.00 18492.57 4431.48 22719.15 00:28:47.560 [2024-11-04T15:40:14.384Z] =================================================================================================================== 00:28:47.560 [2024-11-04T15:40:14.384Z] Total : 6883.96 26.89 0.00 0.00 18492.57 4431.48 22719.15 00:28:47.560 252200.00 IOPS, 985.16 MiB/s 00:28:47.560 Latency(us) 00:28:47.560 [2024-11-04T15:40:14.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.560 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:47.560 Nvme1n1 : 1.00 251817.68 983.66 0.00 0.00 506.04 226.26 1505.77 00:28:47.560 [2024-11-04T15:40:14.384Z] =================================================================================================================== 00:28:47.560 [2024-11-04T15:40:14.384Z] Total : 251817.68 983.66 0.00 0.00 506.04 226.26 1505.77 00:28:47.560 7017.00 IOPS, 27.41 MiB/s 00:28:47.560 Latency(us) 00:28:47.560 [2024-11-04T15:40:14.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.560 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:28:47.560 Nvme1n1 : 1.01 7137.80 27.88 0.00 0.00 17888.02 3822.93 34702.87 00:28:47.560 [2024-11-04T15:40:14.384Z] =================================================================================================================== 00:28:47.560 [2024-11-04T15:40:14.384Z] Total : 7137.80 27.88 0.00 0.00 17888.02 3822.93 34702.87 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3013691 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3013693 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3013696 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.818 16:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.818 rmmod nvme_tcp 00:28:47.818 rmmod nvme_fabrics 00:28:47.818 rmmod nvme_keyring 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3013663 ']' 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3013663 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3013663 ']' 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3013663 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013663 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3013663' 00:28:47.818 killing process with pid 3013663 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3013663 00:28:47.818 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3013663 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.076 16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.076 
16:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.976 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.976 00:28:49.976 real 0m10.347s 00:28:49.976 user 0m14.470s 00:28:49.976 sys 0m6.220s 00:28:49.976 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.976 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:49.976 ************************************ 00:28:49.976 END TEST nvmf_bdev_io_wait 00:28:49.976 ************************************ 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:50.235 ************************************ 00:28:50.235 START TEST nvmf_queue_depth 00:28:50.235 ************************************ 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:50.235 * Looking for test storage... 
00:28:50.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:28:50.235 16:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:50.235 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:50.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.236 --rc genhtml_branch_coverage=1 00:28:50.236 --rc genhtml_function_coverage=1 00:28:50.236 --rc genhtml_legend=1 00:28:50.236 --rc geninfo_all_blocks=1 00:28:50.236 --rc geninfo_unexecuted_blocks=1 00:28:50.236 00:28:50.236 ' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:50.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.236 --rc genhtml_branch_coverage=1 00:28:50.236 --rc genhtml_function_coverage=1 00:28:50.236 --rc genhtml_legend=1 00:28:50.236 --rc geninfo_all_blocks=1 00:28:50.236 --rc geninfo_unexecuted_blocks=1 00:28:50.236 00:28:50.236 ' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:50.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.236 --rc genhtml_branch_coverage=1 00:28:50.236 --rc genhtml_function_coverage=1 00:28:50.236 --rc genhtml_legend=1 00:28:50.236 --rc geninfo_all_blocks=1 00:28:50.236 --rc geninfo_unexecuted_blocks=1 00:28:50.236 00:28:50.236 ' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:50.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.236 --rc genhtml_branch_coverage=1 00:28:50.236 --rc genhtml_function_coverage=1 00:28:50.236 --rc genhtml_legend=1 00:28:50.236 --rc 
geninfo_all_blocks=1 00:28:50.236 --rc geninfo_unexecuted_blocks=1 00:28:50.236 00:28:50.236 ' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.236 16:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.236 16:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.236 16:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.236 16:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.497 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.498 
16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:55.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.498 16:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:55.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:55.498 Found net devices under 0000:86:00.0: cvl_0_0 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:55.498 Found net devices under 0000:86:00.1: cvl_0_1 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.498 16:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.498 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.499 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.499 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.499 16:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:55.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:28:55.499 00:28:55.499 --- 10.0.0.2 ping statistics --- 00:28:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.499 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:55.499 00:28:55.499 --- 10.0.0.1 ping statistics --- 00:28:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.499 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.499 16:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3017375 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3017375 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3017375 ']' 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.499 [2024-11-04 16:40:22.111111] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.499 [2024-11-04 16:40:22.112148] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:28:55.499 [2024-11-04 16:40:22.112185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.499 [2024-11-04 16:40:22.185700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.499 [2024-11-04 16:40:22.226509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.499 [2024-11-04 16:40:22.226544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.499 [2024-11-04 16:40:22.226551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.499 [2024-11-04 16:40:22.226557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.499 [2024-11-04 16:40:22.226563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.499 [2024-11-04 16:40:22.227116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.499 [2024-11-04 16:40:22.292077] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:55.499 [2024-11-04 16:40:22.292310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.499 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 [2024-11-04 16:40:22.359660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 Malloc0 00:28:55.758 16:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.758 [2024-11-04 16:40:22.427592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.758 
16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3017497 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3017497 /var/tmp/bdevperf.sock 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3017497 ']' 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.758 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:55.759 [2024-11-04 16:40:22.477755] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:28:55.759 [2024-11-04 16:40:22.477796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017497 ] 00:28:55.759 [2024-11-04 16:40:22.540739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.017 [2024-11-04 16:40:22.583152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.017 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.017 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:56.017 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:56.017 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.017 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:56.276 NVMe0n1 00:28:56.276 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.276 16:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:56.276 Running I/O for 10 seconds... 
00:28:58.587 11432.00 IOPS, 44.66 MiB/s [2024-11-04T15:40:26.374Z] 11943.00 IOPS, 46.65 MiB/s [2024-11-04T15:40:27.341Z] 12107.00 IOPS, 47.29 MiB/s [2024-11-04T15:40:28.281Z] 12213.00 IOPS, 47.71 MiB/s [2024-11-04T15:40:29.219Z] 12250.20 IOPS, 47.85 MiB/s [2024-11-04T15:40:30.155Z] 12286.17 IOPS, 47.99 MiB/s [2024-11-04T15:40:31.091Z] 12330.14 IOPS, 48.16 MiB/s [2024-11-04T15:40:32.027Z] 12339.88 IOPS, 48.20 MiB/s [2024-11-04T15:40:33.403Z] 12386.33 IOPS, 48.38 MiB/s [2024-11-04T15:40:33.403Z] 12406.30 IOPS, 48.46 MiB/s 00:29:06.579 Latency(us) 00:29:06.579 [2024-11-04T15:40:33.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.579 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:06.579 Verification LBA range: start 0x0 length 0x4000 00:29:06.579 NVMe0n1 : 10.05 12437.89 48.59 0.00 0.00 82055.92 10673.01 55175.07 00:29:06.579 [2024-11-04T15:40:33.403Z] =================================================================================================================== 00:29:06.579 [2024-11-04T15:40:33.403Z] Total : 12437.89 48.59 0.00 0.00 82055.92 10673.01 55175.07 00:29:06.579 { 00:29:06.579 "results": [ 00:29:06.579 { 00:29:06.579 "job": "NVMe0n1", 00:29:06.579 "core_mask": "0x1", 00:29:06.579 "workload": "verify", 00:29:06.579 "status": "finished", 00:29:06.579 "verify_range": { 00:29:06.579 "start": 0, 00:29:06.579 "length": 16384 00:29:06.579 }, 00:29:06.579 "queue_depth": 1024, 00:29:06.579 "io_size": 4096, 00:29:06.579 "runtime": 10.051869, 00:29:06.579 "iops": 12437.885929472419, 00:29:06.579 "mibps": 48.585491912001636, 00:29:06.579 "io_failed": 0, 00:29:06.579 "io_timeout": 0, 00:29:06.579 "avg_latency_us": 82055.92003412679, 00:29:06.579 "min_latency_us": 10673.005714285715, 00:29:06.579 "max_latency_us": 55175.07047619048 00:29:06.579 } 00:29:06.579 ], 00:29:06.579 "core_count": 1 00:29:06.579 } 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3017497 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3017497 ']' 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3017497 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017497 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017497' 00:29:06.579 killing process with pid 3017497 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3017497 00:29:06.579 Received shutdown signal, test time was about 10.000000 seconds 00:29:06.579 00:29:06.579 Latency(us) 00:29:06.579 [2024-11-04T15:40:33.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.579 [2024-11-04T15:40:33.403Z] =================================================================================================================== 00:29:06.579 [2024-11-04T15:40:33.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3017497 00:29:06.579 16:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.579 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.579 rmmod nvme_tcp 00:29:06.579 rmmod nvme_fabrics 00:29:06.579 rmmod nvme_keyring 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3017375 ']' 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3017375 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3017375 ']' 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3017375 00:29:06.580 16:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.580 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017375 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017375' 00:29:06.838 killing process with pid 3017375 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3017375 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3017375 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.838 16:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.376 00:29:09.376 real 0m18.803s 00:29:09.376 user 0m22.342s 00:29:09.376 sys 0m5.647s 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:09.376 ************************************ 00:29:09.376 END TEST nvmf_queue_depth 00:29:09.376 ************************************ 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:09.376 ************************************ 00:29:09.376 START 
TEST nvmf_target_multipath 00:29:09.376 ************************************ 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:09.376 * Looking for test storage... 00:29:09.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.376 16:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.376 --rc genhtml_branch_coverage=1 00:29:09.376 --rc genhtml_function_coverage=1 00:29:09.376 --rc genhtml_legend=1 00:29:09.376 --rc geninfo_all_blocks=1 00:29:09.376 --rc geninfo_unexecuted_blocks=1 00:29:09.376 00:29:09.376 ' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.376 --rc genhtml_branch_coverage=1 00:29:09.376 --rc genhtml_function_coverage=1 00:29:09.376 --rc genhtml_legend=1 00:29:09.376 --rc geninfo_all_blocks=1 00:29:09.376 --rc geninfo_unexecuted_blocks=1 00:29:09.376 00:29:09.376 ' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.376 --rc genhtml_branch_coverage=1 00:29:09.376 --rc genhtml_function_coverage=1 00:29:09.376 --rc genhtml_legend=1 00:29:09.376 --rc geninfo_all_blocks=1 00:29:09.376 --rc geninfo_unexecuted_blocks=1 00:29:09.376 00:29:09.376 ' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.376 --rc genhtml_branch_coverage=1 00:29:09.376 --rc genhtml_function_coverage=1 00:29:09.376 --rc genhtml_legend=1 00:29:09.376 --rc geninfo_all_blocks=1 00:29:09.376 --rc geninfo_unexecuted_blocks=1 00:29:09.376 00:29:09.376 ' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.376 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.377 16:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.377 16:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.377 16:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:14.649 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.649 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.650 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:14.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:14.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:14.650 Found net devices under 0000:86:00.0: cvl_0_0 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.650 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:14.650 Found net devices under 0000:86:00.1: cvl_0_1 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.650 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.650 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.650 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:29:14.651 00:29:14.651 --- 10.0.0.2 ping statistics --- 00:29:14.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.651 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:14.651 00:29:14.651 --- 10.0.0.1 ping statistics --- 00:29:14.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.651 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:14.651 only one NIC for nvmf test 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:14.651 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.651 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.912 rmmod nvme_tcp 00:29:14.912 rmmod nvme_fabrics 00:29:14.912 rmmod nvme_keyring 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:14.912 16:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.912 16:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.818 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.819 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.819 
16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.819 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.819 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.819 00:29:16.819 real 0m7.885s 00:29:16.819 user 0m1.656s 00:29:16.819 sys 0m4.236s 00:29:16.819 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.819 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:16.819 ************************************ 00:29:16.819 END TEST nvmf_target_multipath 00:29:16.819 ************************************ 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:17.078 ************************************ 00:29:17.078 START TEST nvmf_zcopy 00:29:17.078 ************************************ 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:17.078 * Looking for test storage... 
00:29:17.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:17.078 16:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.078 --rc genhtml_branch_coverage=1 00:29:17.078 --rc genhtml_function_coverage=1 00:29:17.078 --rc genhtml_legend=1 00:29:17.078 --rc geninfo_all_blocks=1 00:29:17.078 --rc geninfo_unexecuted_blocks=1 00:29:17.078 00:29:17.078 ' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.078 --rc genhtml_branch_coverage=1 00:29:17.078 --rc genhtml_function_coverage=1 00:29:17.078 --rc genhtml_legend=1 00:29:17.078 --rc geninfo_all_blocks=1 00:29:17.078 --rc geninfo_unexecuted_blocks=1 00:29:17.078 00:29:17.078 ' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.078 --rc genhtml_branch_coverage=1 00:29:17.078 --rc genhtml_function_coverage=1 00:29:17.078 --rc genhtml_legend=1 00:29:17.078 --rc geninfo_all_blocks=1 00:29:17.078 --rc geninfo_unexecuted_blocks=1 00:29:17.078 00:29:17.078 ' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:17.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.078 --rc genhtml_branch_coverage=1 00:29:17.078 --rc genhtml_function_coverage=1 00:29:17.078 --rc genhtml_legend=1 00:29:17.078 --rc geninfo_all_blocks=1 00:29:17.078 --rc geninfo_unexecuted_blocks=1 00:29:17.078 00:29:17.078 ' 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.078 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.079 16:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.079 16:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.079 16:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.502 
16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.502 16:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:22.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:22.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:22.502 Found net devices under 0000:86:00.0: cvl_0_0 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:22.502 Found net devices under 0000:86:00.1: cvl_0_1 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
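The discovery loop traced above globs each PCI function's `net/` directory under sysfs and then strips the leading path with a `##*/` parameter expansion, leaving bare interface names such as `cvl_0_0`. A minimal standalone sketch of that pattern (using a temporary directory instead of the real `/sys/bus/pci/devices` tree, since the actual layout depends on the hardware):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for /sys/bus/pci/devices/<pci>/net/: the real sysfs layout is
# hardware-specific, so this sketch builds a fake tree in a temp directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:86:00.0/net/cvl_0_0" "$tmp/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Glob the interface directories registered under this PCI function.
    pci_net_devs=("$tmp/$pci/net/"*)
    # Keep only the interface names: ##*/ strips everything up to the last '/'.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$tmp"
```

The `"${pci_net_devs[@]##*/}"` form applies the suffix-anchored prefix removal to every array element at once, which is why the trace can go from full sysfs paths to interface names in a single assignment.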
00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.502 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.503 16:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.503 16:40:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:29:22.503 00:29:22.503 --- 10.0.0.2 ping statistics --- 00:29:22.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.503 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:22.503 00:29:22.503 --- 10.0.0.1 ping statistics --- 00:29:22.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.503 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3026081 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3026081 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3026081 ']' 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.503 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.503 [2024-11-04 16:40:49.172170] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:22.503 [2024-11-04 16:40:49.173125] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:29:22.503 [2024-11-04 16:40:49.173157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.503 [2024-11-04 16:40:49.240113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.503 [2024-11-04 16:40:49.280485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.503 [2024-11-04 16:40:49.280521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.503 [2024-11-04 16:40:49.280528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.503 [2024-11-04 16:40:49.280535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.503 [2024-11-04 16:40:49.280540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.503 [2024-11-04 16:40:49.281084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.763 [2024-11-04 16:40:49.347535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:22.763 [2024-11-04 16:40:49.347771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 [2024-11-04 16:40:49.409626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 
16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 [2024-11-04 16:40:49.433718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 malloc0 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.763 { 00:29:22.763 "params": { 00:29:22.763 "name": "Nvme$subsystem", 00:29:22.763 "trtype": "$TEST_TRANSPORT", 00:29:22.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.763 "adrfam": "ipv4", 00:29:22.763 "trsvcid": "$NVMF_PORT", 00:29:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.763 "hdgst": ${hdgst:-false}, 00:29:22.763 "ddgst": ${ddgst:-false} 00:29:22.763 }, 00:29:22.763 "method": "bdev_nvme_attach_controller" 00:29:22.763 } 00:29:22.763 EOF 00:29:22.763 )") 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:22.763 16:40:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:22.763 16:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.763 "params": { 00:29:22.763 "name": "Nvme1", 00:29:22.763 "trtype": "tcp", 00:29:22.763 "traddr": "10.0.0.2", 00:29:22.763 "adrfam": "ipv4", 00:29:22.763 "trsvcid": "4420", 00:29:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.763 "hdgst": false, 00:29:22.763 "ddgst": false 00:29:22.763 }, 00:29:22.763 "method": "bdev_nvme_attach_controller" 00:29:22.763 }' 00:29:22.763 [2024-11-04 16:40:49.515241] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:29:22.763 [2024-11-04 16:40:49.515284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026163 ] 00:29:22.763 [2024-11-04 16:40:49.578572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.022 [2024-11-04 16:40:49.619927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.022 Running I/O for 10 seconds... 
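The `gen_nvmf_target_json` steps traced above (`config=()`, `config+=("$(cat <<-EOF ...)")`, `IFS=,`, `printf '%s\n'`) build the bdevperf `--json /dev/fd/62` config by rendering one heredoc fragment per subsystem, with shell variables expanded inline, and joining the fragments with commas. A simplified sketch of that template-and-join pattern (the trace's `jq .` pretty-printing pass is omitted here, and only a subset of the parameters is shown):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Values the real helper takes from the test environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# One JSON fragment per subsystem: the heredoc body is unquoted, so
# $subsystem and friends expand as the fragment is captured.
config=()
for subsystem in 1; do
config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT", "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
)")
done

# Join the fragments with commas via "${config[*]}", which uses the first
# character of IFS as the separator; restore IFS afterwards.
old_ifs=$IFS
IFS=,
json="${config[*]}"
IFS=$old_ifs
echo "$json"
```

Feeding the result through a file descriptor (`--json /dev/fd/62` in the trace) lets the test hand bdevperf a generated config without ever writing a temp file to disk.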
00:29:25.338 8369.00 IOPS, 65.38 MiB/s [2024-11-04T15:40:53.098Z] 8485.50 IOPS, 66.29 MiB/s [2024-11-04T15:40:54.035Z] 8529.67 IOPS, 66.64 MiB/s [2024-11-04T15:40:54.973Z] 8550.50 IOPS, 66.80 MiB/s [2024-11-04T15:40:55.909Z] 8562.60 IOPS, 66.90 MiB/s [2024-11-04T15:40:56.846Z] 8581.17 IOPS, 67.04 MiB/s [2024-11-04T15:40:58.224Z] 8586.00 IOPS, 67.08 MiB/s [2024-11-04T15:40:59.160Z] 8586.88 IOPS, 67.08 MiB/s [2024-11-04T15:41:00.095Z] 8590.56 IOPS, 67.11 MiB/s [2024-11-04T15:41:00.095Z] 8594.40 IOPS, 67.14 MiB/s 00:29:33.271 Latency(us) 00:29:33.271 [2024-11-04T15:41:00.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.271 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:33.271 Verification LBA range: start 0x0 length 0x1000 00:29:33.271 Nvme1n1 : 10.01 8599.00 67.18 0.00 0.00 14843.63 409.60 21346.01 00:29:33.271 [2024-11-04T15:41:00.095Z] =================================================================================================================== 00:29:33.271 [2024-11-04T15:41:00.095Z] Total : 8599.00 67.18 0.00 0.00 14843.63 409.60 21346.01 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3027766 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:33.271 16:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:33.271 { 00:29:33.271 "params": { 00:29:33.271 "name": "Nvme$subsystem", 00:29:33.271 "trtype": "$TEST_TRANSPORT", 00:29:33.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.271 "adrfam": "ipv4", 00:29:33.271 "trsvcid": "$NVMF_PORT", 00:29:33.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.271 "hdgst": ${hdgst:-false}, 00:29:33.271 "ddgst": ${ddgst:-false} 00:29:33.271 }, 00:29:33.271 "method": "bdev_nvme_attach_controller" 00:29:33.271 } 00:29:33.271 EOF 00:29:33.271 )") 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:33.271 [2024-11-04 16:41:00.009410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.009440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:33.271 16:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:33.271 "params": { 00:29:33.271 "name": "Nvme1", 00:29:33.271 "trtype": "tcp", 00:29:33.271 "traddr": "10.0.0.2", 00:29:33.271 "adrfam": "ipv4", 00:29:33.271 "trsvcid": "4420", 00:29:33.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:33.271 "hdgst": false, 00:29:33.271 "ddgst": false 00:29:33.271 }, 00:29:33.271 "method": "bdev_nvme_attach_controller" 00:29:33.271 }' 00:29:33.271 [2024-11-04 16:41:00.017383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.017398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.025374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.025384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.033373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.033382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.041376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.041385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.050914] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:29:33.271 [2024-11-04 16:41:00.050955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027766 ] 00:29:33.271 [2024-11-04 16:41:00.053377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.053388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.061377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.061388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.069375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.069385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.077373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.077383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.085375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.085385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.271 [2024-11-04 16:41:00.093389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.271 [2024-11-04 16:41:00.093402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.101375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.101385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:29:33.530 [2024-11-04 16:41:00.109377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.109387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.115155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.530 [2024-11-04 16:41:00.117374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.117383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.125375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.125388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.133374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.133388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.141374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.141383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.149375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.149384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.156528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.530 [2024-11-04 16:41:00.157375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.157385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530 [2024-11-04 16:41:00.165377] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:33.530 [2024-11-04 16:41:00.165388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:33.530
(the two error lines above repeat continuously, timestamps advancing from [2024-11-04 16:41:00.165388] through [2024-11-04 16:41:02.114067], as the namespace hot-add is retried against an NSID that is already claimed)
00:29:33.530 Running I/O for 5 seconds...
00:29:34.564 16660.00 IOPS, 130.16 MiB/s [2024-11-04T15:41:01.388Z]
add namespace 00:29:35.340 [2024-11-04 16:41:02.124091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.340 [2024-11-04 16:41:02.124110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.341 [2024-11-04 16:41:02.138924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.341 [2024-11-04 16:41:02.138943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.341 [2024-11-04 16:41:02.148783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.341 [2024-11-04 16:41:02.148802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.341 [2024-11-04 16:41:02.163545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.341 [2024-11-04 16:41:02.163564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.178349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.178367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.188824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.188842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.203241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.203260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.211972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.211991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.226665] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.226684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.235860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.235877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.250413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.250430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.261037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.261054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.274954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.274975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.285544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.285563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.292400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.292417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.304460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.304478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 16672.50 IOPS, 130.25 MiB/s [2024-11-04T15:41:02.423Z] [2024-11-04 16:41:02.319523] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.319541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.334229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.334247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.344913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.344930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.359339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.359358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.366906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.366924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.376248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.376265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.391113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.391131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.400157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.400174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.599 [2024-11-04 16:41:02.414829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:35.599 [2024-11-04 16:41:02.414847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.423944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.423963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.431009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.431026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.440863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.440881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.455020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.455037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.463721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.463739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.478351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.478368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.488818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.488841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.502913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 
[2024-11-04 16:41:02.502931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.513101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.513119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.527445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.527463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.534293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.534313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.545455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.545473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.552386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.552404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.564449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.564467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.579435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.579453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.593691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.593710] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.605109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.605127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.619547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.619565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.633899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.633916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.645494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.645512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.652124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.652142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.666181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.666199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:35.858 [2024-11-04 16:41:02.677021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:35.858 [2024-11-04 16:41:02.677038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.691170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.691188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:36.116 [2024-11-04 16:41:02.706267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.706284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.718083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.718100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.731018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.731036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.738266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.738283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.749624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.749642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.756151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.756168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.768185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.768203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.782837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.782855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.792167] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.792184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.807014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.807032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.822154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.822172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.833234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.833252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.847301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.847318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.862183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.862201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.873140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.873157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.887103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.887121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.894761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.894778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.904080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.904097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.919020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.919038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.116 [2024-11-04 16:41:02.927954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.116 [2024-11-04 16:41:02.927971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:02.942973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:02.942991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:02.952117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:02.952135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:02.966815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:02.966834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:02.976629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:02.976647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:02.991378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 
[2024-11-04 16:41:02.991396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.006147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.006166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.016400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.016419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.031209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.031227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.038905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.038923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.049774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.049792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.063321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.063339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.070848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.070865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.080452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.080469] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.095222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.095239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.104269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.104287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.119258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.119277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.133965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.133982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.145132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.145150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.159696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.159714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.174495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.174513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.375 [2024-11-04 16:41:03.183765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.183783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:36.375 [2024-11-04 16:41:03.198504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.375 [2024-11-04 16:41:03.198522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.633 [2024-11-04 16:41:03.207526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.633 [2024-11-04 16:41:03.207543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.633 [2024-11-04 16:41:03.214305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.633 [2024-11-04 16:41:03.214323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.633 [2024-11-04 16:41:03.225565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.633 [2024-11-04 16:41:03.225584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.633 [2024-11-04 16:41:03.232378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.633 [2024-11-04 16:41:03.232396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.244069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.244087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.258138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.258157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.269257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.269277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.276013] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.276032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.290912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.290932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.301801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.301819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.314670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.314689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 16661.00 IOPS, 130.16 MiB/s [2024-11-04T15:41:03.458Z] [2024-11-04 16:41:03.325348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.325367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.331867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.331886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.346010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.346027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.358815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.358833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.367680] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.367703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.382711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.382729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.391678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.391698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.406373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.406391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.415728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.415746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.430288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.430305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.442430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.442448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.634 [2024-11-04 16:41:03.453687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.634 [2024-11-04 16:41:03.453705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.467045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.467063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.477159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.477177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.491240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.491258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.498198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.498216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.509286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.509305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.522964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.522983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.533422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.533440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.540308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.540326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.552100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 
[2024-11-04 16:41:03.552119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.566808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.566827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.581730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.581748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.593383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.593406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.600220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.600238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.614403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.614422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.625747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.625764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.639747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.639765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.654521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.654539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.664466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.664484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.678963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.678981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.688540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.688558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:36.892 [2024-11-04 16:41:03.703544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:36.892 [2024-11-04 16:41:03.703562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.718404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.718422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.727727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.727745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.742290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.742307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.751674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.751692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:37.151 [2024-11-04 16:41:03.766240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.766258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.777430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.777448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.784082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.784099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.799389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.799408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.813935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.813953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.824667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.824692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.839446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.839464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.853856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.853874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.865618] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.865636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.872422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.872440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.884417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.884435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.898907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.898925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.907959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.907976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.923313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.923331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.938186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.938203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.948881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.948899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.151 [2024-11-04 16:41:03.963667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:37.151 [2024-11-04 16:41:03.963685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:03.978275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:03.978293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:03.988568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:03.988585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.003009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.003027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.011920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.011938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.026700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.026718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.036721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.036739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.051552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.051571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.066179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 
[2024-11-04 16:41:04.066202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.075568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.075585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.082481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.082499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.091988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.092006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.106667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.106685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.117366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.117384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.124192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.124209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.138352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.138370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.148898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.148917] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.163574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.163592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.178017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.178034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.188478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.188496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.203097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.203115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.211967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.211984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.410 [2024-11-04 16:41:04.226919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.410 [2024-11-04 16:41:04.226937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.668 [2024-11-04 16:41:04.236001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.668 [2024-11-04 16:41:04.236018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.250959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.250977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:37.669 [2024-11-04 16:41:04.259772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.259790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.266521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.266538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.276277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.276295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.290954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.290971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.300524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.300541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.315423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.315442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 16661.00 IOPS, 130.16 MiB/s [2024-11-04T15:41:04.493Z] [2024-11-04 16:41:04.329813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.329831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.342142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.342160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:37.669 [2024-11-04 16:41:04.354947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.354965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.365028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.365046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.379401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.379420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.388230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.388247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.403451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.403469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.418383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.418401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.430820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.430838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.441159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.441176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.455324] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.455342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.469970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.469988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.669 [2024-11-04 16:41:04.481264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.669 [2024-11-04 16:41:04.481282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.495467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.495485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.502854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.502871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.512462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.512480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.527482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.527501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.541973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.541991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.554114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.554131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.567071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.567090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.575831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.575849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.582853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.582871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.592704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.592723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.607424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.607442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.622318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.622335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.633888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.633905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.647173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 
[2024-11-04 16:41:04.647191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.654063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.654080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.665235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.665254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.679212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.679231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.688071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.688089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.702685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.702704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.712009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.712029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.726718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.726741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.736224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.736243] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:37.927 [2024-11-04 16:41:04.750952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:37.927 [2024-11-04 16:41:04.750970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.759730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.759748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.766433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.766450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.781299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.781320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.788515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.788533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.802508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.802527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.813808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.813826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.827284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.827302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:38.186 [2024-11-04 16:41:04.841846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.841864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.853988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.854006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.866522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.866540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.877489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.877507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.884174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.884194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.898342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.898361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.909916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.909935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.922486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.922505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.933857] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.933875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.947243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.947265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.962114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.962132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.973377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.973396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.980276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.980294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:04.994612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:04.994630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.186 [2024-11-04 16:41:05.004455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.186 [2024-11-04 16:41:05.004473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.445 [2024-11-04 16:41:05.018917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.445 [2024-11-04 16:41:05.018935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.445 [2024-11-04 16:41:05.027645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:38.445 [2024-11-04 16:41:05.027663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.445 [2024-11-04 16:41:05.034711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.034729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.044419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.044437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.059443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.059462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.074147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.074166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.086274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.086293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.098906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.098924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.109244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.109263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.123474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 
[2024-11-04 16:41:05.123492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.138416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.138434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.149124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.149143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.163334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.163353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.170749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.170770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.180169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.180188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.194834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.194851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.204215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.204233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.218879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.218897] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.229006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.229024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.243128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.243146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.257931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.257949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.446 [2024-11-04 16:41:05.269266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.446 [2024-11-04 16:41:05.269284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.283394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.283412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.298327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.298345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.309732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.309749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 16668.20 IOPS, 130.22 MiB/s [2024-11-04T15:41:05.529Z] [2024-11-04 16:41:05.322409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.322426] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 00:29:38.705 Latency(us) 00:29:38.705 [2024-11-04T15:41:05.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.705 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:38.705 Nvme1n1 : 5.01 16670.99 130.24 0.00 0.00 7671.14 2028.50 12732.71 00:29:38.705 [2024-11-04T15:41:05.529Z] =================================================================================================================== 00:29:38.705 [2024-11-04T15:41:05.529Z] Total : 16670.99 130.24 0.00 0.00 7671.14 2028.50 12732.71 00:29:38.705 [2024-11-04 16:41:05.329381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.329397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.337378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.337393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.345382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.345396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.353384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.353396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.361388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.361403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.369380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:29:38.705 [2024-11-04 16:41:05.369393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.377380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.377391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.385379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.385390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.393376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.393388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.401377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.401389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.409376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.409387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.417378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.417391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.425374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.425385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.433375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.433385] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.441374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.441382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.449373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.449382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.457379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.457389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.465375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.465384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.473374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.473383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.705 [2024-11-04 16:41:05.481375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:38.705 [2024-11-04 16:41:05.481384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:38.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3027766) - No such process 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3027766 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.706 16:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:38.706 delay0 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.706 16:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:38.964 [2024-11-04 16:41:05.609260] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:45.523 Initializing NVMe Controllers 00:29:45.523 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.523 Initialization complete. Launching workers. 00:29:45.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3227 00:29:45.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3503, failed to submit 44 00:29:45.523 success 3352, unsuccessful 151, failed 0 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.523 rmmod nvme_tcp 00:29:45.523 rmmod nvme_fabrics 00:29:45.523 rmmod nvme_keyring 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3026081 ']' 
00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3026081 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3026081 ']' 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3026081 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3026081 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3026081' 00:29:45.523 killing process with pid 3026081 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3026081 00:29:45.523 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3026081 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:45.782 16:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.782 16:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.316 00:29:48.316 real 0m30.845s 00:29:48.316 user 0m40.250s 00:29:48.316 sys 0m11.802s 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:48.316 ************************************ 00:29:48.316 END TEST nvmf_zcopy 00:29:48.316 ************************************ 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:48.316 16:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:48.316 ************************************ 00:29:48.316 START TEST nvmf_nmic 00:29:48.316 ************************************ 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:48.316 * Looking for test storage... 00:29:48.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.316 --rc genhtml_branch_coverage=1 00:29:48.316 --rc genhtml_function_coverage=1 00:29:48.316 --rc genhtml_legend=1 00:29:48.316 --rc geninfo_all_blocks=1 00:29:48.316 --rc geninfo_unexecuted_blocks=1 00:29:48.316 00:29:48.316 ' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.316 --rc genhtml_branch_coverage=1 00:29:48.316 --rc genhtml_function_coverage=1 00:29:48.316 --rc genhtml_legend=1 00:29:48.316 --rc geninfo_all_blocks=1 00:29:48.316 --rc geninfo_unexecuted_blocks=1 00:29:48.316 00:29:48.316 ' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.316 --rc genhtml_branch_coverage=1 00:29:48.316 --rc genhtml_function_coverage=1 00:29:48.316 --rc genhtml_legend=1 00:29:48.316 --rc geninfo_all_blocks=1 00:29:48.316 --rc geninfo_unexecuted_blocks=1 00:29:48.316 
00:29:48.316 ' 00:29:48.316 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.316 --rc genhtml_branch_coverage=1 00:29:48.316 --rc genhtml_function_coverage=1 00:29:48.316 --rc genhtml_legend=1 00:29:48.316 --rc geninfo_all_blocks=1 00:29:48.317 --rc geninfo_unexecuted_blocks=1 00:29:48.317 00:29:48.317 ' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.317 16:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.317 16:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.317 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.581 16:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.581 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.582 16:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:53.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:53.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.582 16:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:53.582 Found net devices under 0000:86:00.0: cvl_0_0 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.582 16:41:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:53.582 Found net devices under 0000:86:00.1: cvl_0_1 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.582 16:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.582 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:29:53.582 00:29:53.582 --- 10.0.0.2 ping statistics --- 00:29:53.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.582 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:53.582 00:29:53.582 --- 10.0.0.1 ping statistics --- 00:29:53.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.582 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3033117 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3033117 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3033117 ']' 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.582 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.583 [2024-11-04 16:41:20.196035] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.583 [2024-11-04 16:41:20.196966] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:29:53.583 [2024-11-04 16:41:20.196997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.583 [2024-11-04 16:41:20.265040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.583 [2024-11-04 16:41:20.309057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.583 [2024-11-04 16:41:20.309092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.583 [2024-11-04 16:41:20.309102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.583 [2024-11-04 16:41:20.309108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.583 [2024-11-04 16:41:20.309113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.583 [2024-11-04 16:41:20.310644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.583 [2024-11-04 16:41:20.310742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.583 [2024-11-04 16:41:20.310759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.583 [2024-11-04 16:41:20.310761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.583 [2024-11-04 16:41:20.378093] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.583 [2024-11-04 16:41:20.378164] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:53.583 [2024-11-04 16:41:20.378306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:53.583 [2024-11-04 16:41:20.378506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.583 [2024-11-04 16:41:20.378688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.583 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.841 [2024-11-04 16:41:20.443505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.841 Malloc0 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.841 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 [2024-11-04 16:41:20.511490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:53.842 test case1: single bdev can't be used in multiple subsystems 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 [2024-11-04 16:41:20.539195] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:29:53.842 [2024-11-04 16:41:20.539215] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:53.842 [2024-11-04 16:41:20.539222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.842 request: 00:29:53.842 { 00:29:53.842 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:53.842 "namespace": { 00:29:53.842 "bdev_name": "Malloc0", 00:29:53.842 "no_auto_visible": false 00:29:53.842 }, 00:29:53.842 "method": "nvmf_subsystem_add_ns", 00:29:53.842 "req_id": 1 00:29:53.842 } 00:29:53.842 Got JSON-RPC error response 00:29:53.842 response: 00:29:53.842 { 00:29:53.842 "code": -32602, 00:29:53.842 "message": "Invalid parameters" 00:29:53.842 } 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:53.842 Adding namespace failed - expected result. 
00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:53.842 test case2: host connect to nvmf target in multiple paths 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:53.842 [2024-11-04 16:41:20.551288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.842 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:54.100 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:54.358 16:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:54.358 16:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:54.358 16:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:54.358 16:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:54.358 16:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:56.886 16:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:56.886 [global] 00:29:56.886 thread=1 00:29:56.886 invalidate=1 00:29:56.886 rw=write 00:29:56.886 time_based=1 00:29:56.886 runtime=1 00:29:56.886 ioengine=libaio 00:29:56.886 direct=1 00:29:56.886 bs=4096 00:29:56.886 iodepth=1 00:29:56.886 norandommap=0 00:29:56.886 numjobs=1 00:29:56.886 00:29:56.886 verify_dump=1 00:29:56.886 verify_backlog=512 00:29:56.886 verify_state_save=0 00:29:56.886 do_verify=1 00:29:56.886 verify=crc32c-intel 00:29:56.886 [job0] 00:29:56.886 filename=/dev/nvme0n1 00:29:56.886 Could not set queue depth (nvme0n1) 00:29:56.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:56.886 fio-3.35 00:29:56.886 Starting 1 thread 00:29:57.819 00:29:57.819 job0: (groupid=0, jobs=1): err= 0: pid=3033813: Mon Nov 4 
16:41:24 2024 00:29:57.819 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:29:57.819 slat (nsec): min=10415, max=24571, avg=21253.23, stdev=2514.04 00:29:57.819 clat (usec): min=40929, max=41484, avg=40992.96, stdev=111.71 00:29:57.819 lat (usec): min=40950, max=41495, avg=41014.21, stdev=109.35 00:29:57.819 clat percentiles (usec): 00:29:57.819 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:57.819 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:57.819 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:57.819 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:57.819 | 99.99th=[41681] 00:29:57.819 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:29:57.819 slat (nsec): min=9908, max=41724, avg=11750.47, stdev=3250.60 00:29:57.819 clat (usec): min=134, max=1699, avg=209.78, stdev=136.64 00:29:57.819 lat (usec): min=146, max=1710, avg=221.53, stdev=136.65 00:29:57.819 clat percentiles (usec): 00:29:57.819 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:29:57.819 | 30.00th=[ 161], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 196], 00:29:57.819 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 260], 00:29:57.819 | 99.00th=[ 1270], 99.50th=[ 1336], 99.90th=[ 1696], 99.95th=[ 1696], 00:29:57.819 | 99.99th=[ 1696] 00:29:57.819 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:57.819 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:57.819 lat (usec) : 250=74.53%, 500=20.22% 00:29:57.819 lat (msec) : 2=1.12%, 50=4.12% 00:29:57.819 cpu : usr=0.39%, sys=0.88%, ctx=534, majf=0, minf=1 00:29:57.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:57.819 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:57.819 00:29:57.819 Run status group 0 (all jobs): 00:29:57.819 READ: bw=86.4KiB/s (88.5kB/s), 86.4KiB/s-86.4KiB/s (88.5kB/s-88.5kB/s), io=88.0KiB (90.1kB), run=1018-1018msec 00:29:57.819 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:29:57.819 00:29:57.819 Disk stats (read/write): 00:29:57.819 nvme0n1: ios=69/512, merge=0/0, ticks=804/104, in_queue=908, util=91.18% 00:29:57.819 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:58.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:58.077 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.078 rmmod nvme_tcp 00:29:58.078 rmmod nvme_fabrics 00:29:58.078 rmmod nvme_keyring 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3033117 ']' 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3033117 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3033117 ']' 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3033117 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.078 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3033117 00:29:58.336 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.336 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.336 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033117' 00:29:58.336 killing process with pid 3033117 00:29:58.336 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3033117 00:29:58.336 16:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3033117 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.336 16:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.336 16:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.868 00:30:00.868 real 0m12.572s 00:30:00.868 user 0m24.353s 00:30:00.868 sys 0m5.731s 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:00.868 ************************************ 00:30:00.868 END TEST nvmf_nmic 00:30:00.868 ************************************ 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.868 ************************************ 00:30:00.868 START TEST nvmf_fio_target 00:30:00.868 ************************************ 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:00.868 * Looking for test storage... 
00:30:00.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.868 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.868 --rc genhtml_branch_coverage=1 00:30:00.868 --rc genhtml_function_coverage=1 00:30:00.868 --rc genhtml_legend=1 00:30:00.868 --rc geninfo_all_blocks=1 00:30:00.868 --rc geninfo_unexecuted_blocks=1 00:30:00.868 00:30:00.868 ' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.868 --rc genhtml_branch_coverage=1 00:30:00.868 --rc genhtml_function_coverage=1 00:30:00.868 --rc genhtml_legend=1 00:30:00.868 --rc geninfo_all_blocks=1 00:30:00.868 --rc geninfo_unexecuted_blocks=1 00:30:00.868 00:30:00.868 ' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.868 --rc genhtml_branch_coverage=1 00:30:00.868 --rc genhtml_function_coverage=1 00:30:00.868 --rc genhtml_legend=1 00:30:00.868 --rc geninfo_all_blocks=1 00:30:00.868 --rc geninfo_unexecuted_blocks=1 00:30:00.868 00:30:00.868 ' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.868 --rc genhtml_branch_coverage=1 00:30:00.868 --rc genhtml_function_coverage=1 00:30:00.868 --rc genhtml_legend=1 00:30:00.868 --rc geninfo_all_blocks=1 
00:30:00.868 --rc geninfo_unexecuted_blocks=1 00:30:00.868 00:30:00.868 ' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.868 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.868 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.868 16:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.869 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.869 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.869 16:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.126 16:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:06.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:06.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.126 
16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.126 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:06.127 Found net 
devices under 0000:86:00.0: cvl_0_0 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:06.127 Found net devices under 0000:86:00.1: cvl_0_1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.127 16:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:30:06.127 00:30:06.127 --- 10.0.0.2 ping statistics --- 00:30:06.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.127 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:06.127 00:30:06.127 --- 10.0.0.1 ping statistics --- 00:30:06.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.127 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.127 16:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3037483 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3037483 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3037483 ']' 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.127 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.385 [2024-11-04 16:41:32.952825] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.385 [2024-11-04 16:41:32.953706] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:30:06.385 [2024-11-04 16:41:32.953737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.385 [2024-11-04 16:41:33.026207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.385 [2024-11-04 16:41:33.069714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.385 [2024-11-04 16:41:33.069751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.385 [2024-11-04 16:41:33.069760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.385 [2024-11-04 16:41:33.069766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.385 [2024-11-04 16:41:33.069771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.385 [2024-11-04 16:41:33.071362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.385 [2024-11-04 16:41:33.071381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.385 [2024-11-04 16:41:33.071481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.385 [2024-11-04 16:41:33.071483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.385 [2024-11-04 16:41:33.138745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:06.385 [2024-11-04 16:41:33.138874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:06.385 [2024-11-04 16:41:33.139150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:06.385 [2024-11-04 16:41:33.139397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:06.385 [2024-11-04 16:41:33.139575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.385 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:06.643 [2024-11-04 16:41:33.368019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.643 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:06.900 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:06.900 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:30:07.158 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:07.158 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.416 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:07.416 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.673 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:07.673 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:07.673 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.931 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:07.931 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:08.189 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:08.189 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:08.446 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:30:08.446 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:08.446 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:08.703 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:08.703 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.960 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:08.960 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:09.217 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.217 [2024-11-04 16:41:35.952160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.217 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:09.474 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:09.731 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:09.988 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:30:11.882 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:11.882 [global] 00:30:11.882 thread=1 00:30:11.882 invalidate=1 00:30:11.882 rw=write 00:30:11.882 time_based=1 00:30:11.882 runtime=1 00:30:11.882 ioengine=libaio 00:30:11.882 direct=1 00:30:11.882 bs=4096 00:30:11.882 iodepth=1 00:30:11.882 norandommap=0 00:30:11.882 numjobs=1 00:30:11.882 00:30:11.882 verify_dump=1 00:30:11.882 verify_backlog=512 00:30:11.882 verify_state_save=0 00:30:11.882 do_verify=1 00:30:11.882 verify=crc32c-intel 00:30:11.882 [job0] 00:30:11.882 filename=/dev/nvme0n1 00:30:11.882 [job1] 00:30:11.882 filename=/dev/nvme0n2 00:30:11.882 [job2] 00:30:11.882 filename=/dev/nvme0n3 00:30:11.882 [job3] 00:30:11.882 filename=/dev/nvme0n4 00:30:12.139 Could not set queue depth (nvme0n1) 00:30:12.139 Could not set queue depth (nvme0n2) 00:30:12.139 Could not set queue depth (nvme0n3) 00:30:12.139 Could not set queue depth (nvme0n4) 00:30:12.396 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.396 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.396 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.396 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.396 fio-3.35 00:30:12.396 Starting 4 threads 00:30:13.764 00:30:13.764 job0: (groupid=0, jobs=1): err= 0: pid=3038597: Mon Nov 4 16:41:40 2024 00:30:13.764 read: IOPS=1182, BW=4731KiB/s (4845kB/s)(4736KiB/1001msec) 00:30:13.764 slat (nsec): min=6720, max=27601, avg=7898.40, stdev=1900.37 00:30:13.764 clat (usec): min=185, max=41367, avg=594.32, stdev=3724.30 00:30:13.764 lat (usec): min=192, 
max=41374, avg=602.22, stdev=3724.35 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 223], 00:30:13.764 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 249], 00:30:13.764 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 334], 00:30:13.764 | 99.00th=[ 510], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:13.764 | 99.99th=[41157] 00:30:13.764 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:13.764 slat (usec): min=9, max=15543, avg=21.12, stdev=396.31 00:30:13.764 clat (usec): min=118, max=1860, avg=161.33, stdev=59.96 00:30:13.764 lat (usec): min=129, max=15832, avg=182.45, stdev=404.07 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:30:13.764 | 30.00th=[ 137], 40.00th=[ 145], 50.00th=[ 157], 60.00th=[ 163], 00:30:13.764 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 196], 95.00th=[ 241], 00:30:13.764 | 99.00th=[ 253], 99.50th=[ 322], 99.90th=[ 701], 99.95th=[ 1860], 00:30:13.764 | 99.99th=[ 1860] 00:30:13.764 bw ( KiB/s): min= 8040, max= 8040, per=29.48%, avg=8040.00, stdev= 0.00, samples=1 00:30:13.764 iops : min= 2010, max= 2010, avg=2010.00, stdev= 0.00, samples=1 00:30:13.764 lat (usec) : 250=83.68%, 500=15.66%, 750=0.26% 00:30:13.764 lat (msec) : 2=0.04%, 50=0.37% 00:30:13.764 cpu : usr=2.10%, sys=2.00%, ctx=2724, majf=0, minf=1 00:30:13.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.764 issued rwts: total=1184,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.764 job1: (groupid=0, jobs=1): err= 0: pid=3038598: Mon Nov 4 16:41:40 2024 00:30:13.764 read: IOPS=1245, BW=4983KiB/s 
(5102kB/s)(5152KiB/1034msec) 00:30:13.764 slat (nsec): min=6299, max=32884, avg=8003.01, stdev=2649.84 00:30:13.764 clat (usec): min=199, max=41237, avg=582.90, stdev=3752.43 00:30:13.764 lat (usec): min=206, max=41244, avg=590.90, stdev=3752.79 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:30:13.764 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:30:13.764 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 265], 00:30:13.764 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:13.764 | 99.99th=[41157] 00:30:13.764 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets 00:30:13.764 slat (nsec): min=9177, max=38135, avg=10504.49, stdev=1707.44 00:30:13.764 clat (usec): min=132, max=361, avg=162.61, stdev=18.20 00:30:13.764 lat (usec): min=142, max=399, avg=173.11, stdev=18.92 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:30:13.764 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:30:13.764 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 198], 00:30:13.764 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 231], 99.95th=[ 363], 00:30:13.764 | 99.99th=[ 363] 00:30:13.764 bw ( KiB/s): min= 4096, max= 8192, per=22.53%, avg=6144.00, stdev=2896.31, samples=2 00:30:13.764 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:30:13.764 lat (usec) : 250=93.02%, 500=6.55%, 750=0.04% 00:30:13.764 lat (msec) : 50=0.39% 00:30:13.764 cpu : usr=1.45%, sys=2.61%, ctx=2824, majf=0, minf=2 00:30:13.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.764 issued rwts: total=1288,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.764 
latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.764 job2: (groupid=0, jobs=1): err= 0: pid=3038599: Mon Nov 4 16:41:40 2024 00:30:13.764 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:13.764 slat (nsec): min=7531, max=44996, avg=8643.76, stdev=1311.44 00:30:13.764 clat (usec): min=183, max=1536, avg=254.74, stdev=51.72 00:30:13.764 lat (usec): min=191, max=1544, avg=263.38, stdev=51.79 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 192], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:30:13.764 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:30:13.764 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 318], 00:30:13.764 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 506], 99.95th=[ 523], 00:30:13.764 | 99.99th=[ 1532] 00:30:13.764 write: IOPS=2439, BW=9758KiB/s (9992kB/s)(9768KiB/1001msec); 0 zone resets 00:30:13.764 slat (usec): min=3, max=882, avg=11.63, stdev=17.93 00:30:13.764 clat (usec): min=134, max=465, avg=171.45, stdev=22.47 00:30:13.764 lat (usec): min=146, max=1068, avg=183.08, stdev=29.05 00:30:13.764 clat percentiles (usec): 00:30:13.764 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:30:13.764 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:30:13.764 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 208], 00:30:13.764 | 99.00th=[ 245], 99.50th=[ 269], 99.90th=[ 408], 99.95th=[ 453], 00:30:13.765 | 99.99th=[ 465] 00:30:13.765 bw ( KiB/s): min= 9368, max= 9368, per=34.35%, avg=9368.00, stdev= 0.00, samples=1 00:30:13.765 iops : min= 2342, max= 2342, avg=2342.00, stdev= 0.00, samples=1 00:30:13.765 lat (usec) : 250=82.23%, 500=17.71%, 750=0.04% 00:30:13.765 lat (msec) : 2=0.02% 00:30:13.765 cpu : usr=3.10%, sys=7.60%, ctx=4492, majf=0, minf=1 00:30:13.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.765 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.765 issued rwts: total=2048,2442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.765 job3: (groupid=0, jobs=1): err= 0: pid=3038600: Mon Nov 4 16:41:40 2024 00:30:13.765 read: IOPS=1018, BW=4075KiB/s (4173kB/s)(4140KiB/1016msec) 00:30:13.765 slat (nsec): min=6642, max=27529, avg=8324.41, stdev=2182.40 00:30:13.765 clat (usec): min=214, max=41996, avg=687.17, stdev=4199.14 00:30:13.765 lat (usec): min=222, max=42019, avg=695.50, stdev=4200.38 00:30:13.765 clat percentiles (usec): 00:30:13.765 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:30:13.765 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:30:13.765 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:30:13.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:30:13.765 | 99.99th=[42206] 00:30:13.765 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:30:13.765 slat (nsec): min=6329, max=47391, avg=11485.78, stdev=2546.67 00:30:13.765 clat (usec): min=143, max=341, avg=177.04, stdev=14.80 00:30:13.765 lat (usec): min=155, max=388, avg=188.53, stdev=15.72 00:30:13.765 clat percentiles (usec): 00:30:13.765 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:30:13.765 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:30:13.765 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:30:13.765 | 99.00th=[ 219], 99.50th=[ 249], 99.90th=[ 306], 99.95th=[ 343], 00:30:13.765 | 99.99th=[ 343] 00:30:13.765 bw ( KiB/s): min= 2672, max= 9616, per=22.53%, avg=6144.00, stdev=4910.15, samples=2 00:30:13.765 iops : min= 668, max= 2404, avg=1536.00, stdev=1227.54, samples=2 00:30:13.765 lat (usec) : 250=81.29%, 500=18.28% 00:30:13.765 lat (msec) : 50=0.43% 00:30:13.765 cpu : usr=1.18%, sys=2.76%, ctx=2571, majf=0, minf=1 
00:30:13.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.765 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.765 00:30:13.765 Run status group 0 (all jobs): 00:30:13.765 READ: bw=21.0MiB/s (22.0MB/s), 4075KiB/s-8184KiB/s (4173kB/s-8380kB/s), io=21.7MiB (22.8MB), run=1001-1034msec 00:30:13.765 WRITE: bw=26.6MiB/s (27.9MB/s), 5942KiB/s-9758KiB/s (6085kB/s-9992kB/s), io=27.5MiB (28.9MB), run=1001-1034msec 00:30:13.765 00:30:13.765 Disk stats (read/write): 00:30:13.765 nvme0n1: ios=1076/1138, merge=0/0, ticks=804/186, in_queue=990, util=98.00% 00:30:13.765 nvme0n2: ios=1140/1536, merge=0/0, ticks=552/242, in_queue=794, util=87.20% 00:30:13.765 nvme0n3: ios=1857/2048, merge=0/0, ticks=651/331, in_queue=982, util=98.44% 00:30:13.765 nvme0n4: ios=1031/1536, merge=0/0, ticks=542/255, in_queue=797, util=89.71% 00:30:13.765 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:13.765 [global] 00:30:13.765 thread=1 00:30:13.765 invalidate=1 00:30:13.765 rw=randwrite 00:30:13.765 time_based=1 00:30:13.765 runtime=1 00:30:13.765 ioengine=libaio 00:30:13.765 direct=1 00:30:13.765 bs=4096 00:30:13.765 iodepth=1 00:30:13.765 norandommap=0 00:30:13.765 numjobs=1 00:30:13.765 00:30:13.765 verify_dump=1 00:30:13.765 verify_backlog=512 00:30:13.765 verify_state_save=0 00:30:13.765 do_verify=1 00:30:13.765 verify=crc32c-intel 00:30:13.765 [job0] 00:30:13.765 filename=/dev/nvme0n1 00:30:13.765 [job1] 00:30:13.765 filename=/dev/nvme0n2 00:30:13.765 [job2] 00:30:13.765 filename=/dev/nvme0n3 00:30:13.765 [job3] 00:30:13.765 
filename=/dev/nvme0n4 00:30:13.765 Could not set queue depth (nvme0n1) 00:30:13.765 Could not set queue depth (nvme0n2) 00:30:13.765 Could not set queue depth (nvme0n3) 00:30:13.765 Could not set queue depth (nvme0n4) 00:30:13.765 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.765 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.765 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.765 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:13.765 fio-3.35 00:30:13.765 Starting 4 threads 00:30:15.135 00:30:15.135 job0: (groupid=0, jobs=1): err= 0: pid=3038989: Mon Nov 4 16:41:41 2024 00:30:15.135 read: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec) 00:30:15.135 slat (nsec): min=6927, max=23510, avg=8054.63, stdev=1250.59 00:30:15.135 clat (usec): min=233, max=613, avg=290.73, stdev=62.20 00:30:15.135 lat (usec): min=241, max=621, avg=298.79, stdev=62.25 00:30:15.135 clat percentiles (usec): 00:30:15.135 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:30:15.135 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 269], 60.00th=[ 273], 00:30:15.135 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 449], 95.00th=[ 461], 00:30:15.135 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 519], 99.95th=[ 611], 00:30:15.135 | 99.99th=[ 611] 00:30:15.135 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:15.135 slat (nsec): min=9769, max=45617, avg=10918.38, stdev=1639.76 00:30:15.135 clat (usec): min=151, max=361, avg=192.97, stdev=35.19 00:30:15.135 lat (usec): min=161, max=373, avg=203.89, stdev=35.40 00:30:15.135 clat percentiles (usec): 00:30:15.135 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:30:15.135 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 
60.00th=[ 186], 00:30:15.135 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 255], 95.00th=[ 289], 00:30:15.135 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 343], 00:30:15.135 | 99.99th=[ 363] 00:30:15.135 bw ( KiB/s): min= 8192, max= 8192, per=29.64%, avg=8192.00, stdev= 0.00, samples=1 00:30:15.135 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:15.135 lat (usec) : 250=49.42%, 500=50.51%, 750=0.08% 00:30:15.135 cpu : usr=3.80%, sys=5.70%, ctx=3958, majf=0, minf=1 00:30:15.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.135 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:15.135 job1: (groupid=0, jobs=1): err= 0: pid=3039003: Mon Nov 4 16:41:41 2024 00:30:15.135 read: IOPS=2000, BW=8004KiB/s (8196kB/s)(8012KiB/1001msec) 00:30:15.135 slat (nsec): min=4874, max=31231, avg=7811.92, stdev=2081.06 00:30:15.135 clat (usec): min=190, max=894, avg=255.38, stdev=32.91 00:30:15.135 lat (usec): min=199, max=904, avg=263.19, stdev=33.01 00:30:15.135 clat percentiles (usec): 00:30:15.135 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:30:15.135 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:30:15.135 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:30:15.135 | 99.00th=[ 351], 99.50th=[ 416], 99.90th=[ 506], 99.95th=[ 510], 00:30:15.135 | 99.99th=[ 898] 00:30:15.135 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:15.135 slat (usec): min=7, max=34962, avg=39.62, stdev=913.97 00:30:15.135 clat (usec): min=135, max=1117, avg=185.51, stdev=37.22 00:30:15.135 lat (usec): min=145, max=35288, avg=225.12, stdev=918.30 00:30:15.136 clat percentiles (usec): 
00:30:15.136 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:30:15.136 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 184], 00:30:15.136 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 239], 95.00th=[ 243], 00:30:15.136 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 343], 00:30:15.136 | 99.99th=[ 1123] 00:30:15.136 bw ( KiB/s): min= 9608, max= 9608, per=34.77%, avg=9608.00, stdev= 0.00, samples=1 00:30:15.136 iops : min= 2402, max= 2402, avg=2402.00, stdev= 0.00, samples=1 00:30:15.136 lat (usec) : 250=73.44%, 500=26.46%, 750=0.05%, 1000=0.02% 00:30:15.136 lat (msec) : 2=0.02% 00:30:15.136 cpu : usr=3.60%, sys=6.00%, ctx=4054, majf=0, minf=1 00:30:15.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 issued rwts: total=2003,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:15.136 job2: (groupid=0, jobs=1): err= 0: pid=3039031: Mon Nov 4 16:41:41 2024 00:30:15.136 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:30:15.136 slat (nsec): min=10664, max=37137, avg=20166.45, stdev=6812.49 00:30:15.136 clat (usec): min=40819, max=41098, avg=40964.50, stdev=72.93 00:30:15.136 lat (usec): min=40842, max=41112, avg=40984.67, stdev=72.71 00:30:15.136 clat percentiles (usec): 00:30:15.136 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:15.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:15.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:15.136 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:15.136 | 99.99th=[41157] 00:30:15.136 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:30:15.136 slat (nsec): min=10925, 
max=55124, avg=13337.39, stdev=2866.27 00:30:15.136 clat (usec): min=160, max=642, avg=189.72, stdev=42.16 00:30:15.136 lat (usec): min=171, max=667, avg=203.06, stdev=43.03 00:30:15.136 clat percentiles (usec): 00:30:15.136 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:30:15.136 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:30:15.136 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 231], 95.00th=[ 239], 00:30:15.136 | 99.00th=[ 269], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 644], 00:30:15.136 | 99.99th=[ 644] 00:30:15.136 bw ( KiB/s): min= 4096, max= 4096, per=14.82%, avg=4096.00, stdev= 0.00, samples=1 00:30:15.136 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:15.136 lat (usec) : 250=94.01%, 500=1.12%, 750=0.75% 00:30:15.136 lat (msec) : 50=4.12% 00:30:15.136 cpu : usr=0.30%, sys=1.19%, ctx=535, majf=0, minf=1 00:30:15.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:15.136 job3: (groupid=0, jobs=1): err= 0: pid=3039042: Mon Nov 4 16:41:41 2024 00:30:15.136 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:15.136 slat (nsec): min=7572, max=23361, avg=8696.96, stdev=1148.79 00:30:15.136 clat (usec): min=209, max=2107, avg=254.52, stdev=62.99 00:30:15.136 lat (usec): min=217, max=2121, avg=263.21, stdev=63.08 00:30:15.136 clat percentiles (usec): 00:30:15.136 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:30:15.136 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:30:15.136 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 289], 00:30:15.136 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 685], 
99.95th=[ 1827], 00:30:15.136 | 99.99th=[ 2114] 00:30:15.136 write: IOPS=2353, BW=9415KiB/s (9641kB/s)(9424KiB/1001msec); 0 zone resets 00:30:15.136 slat (nsec): min=10859, max=50443, avg=12133.15, stdev=1848.54 00:30:15.136 clat (usec): min=146, max=1846, avg=177.84, stdev=37.06 00:30:15.136 lat (usec): min=158, max=1859, avg=189.97, stdev=37.28 00:30:15.136 clat percentiles (usec): 00:30:15.136 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:30:15.136 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:30:15.136 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:30:15.136 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 297], 99.95th=[ 375], 00:30:15.136 | 99.99th=[ 1844] 00:30:15.136 bw ( KiB/s): min= 9184, max= 9184, per=33.23%, avg=9184.00, stdev= 0.00, samples=1 00:30:15.136 iops : min= 2296, max= 2296, avg=2296.00, stdev= 0.00, samples=1 00:30:15.136 lat (usec) : 250=83.88%, 500=15.92%, 750=0.14% 00:30:15.136 lat (msec) : 2=0.05%, 4=0.02% 00:30:15.136 cpu : usr=3.80%, sys=7.20%, ctx=4405, majf=0, minf=1 00:30:15.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.136 issued rwts: total=2048,2356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:15.136 00:30:15.136 Run status group 0 (all jobs): 00:30:15.136 READ: bw=23.2MiB/s (24.3MB/s), 87.3KiB/s-8184KiB/s (89.4kB/s-8380kB/s), io=23.4MiB (24.5MB), run=1001-1008msec 00:30:15.136 WRITE: bw=27.0MiB/s (28.3MB/s), 2032KiB/s-9415KiB/s (2081kB/s-9641kB/s), io=27.2MiB (28.5MB), run=1001-1008msec 00:30:15.136 00:30:15.136 Disk stats (read/write): 00:30:15.136 nvme0n1: ios=1585/1633, merge=0/0, ticks=449/303, in_queue=752, util=82.05% 00:30:15.136 nvme0n2: ios=1570/1615, merge=0/0, ticks=1206/291, 
in_queue=1497, util=99.59% 00:30:15.136 nvme0n3: ios=56/512, merge=0/0, ticks=1496/94, in_queue=1590, util=95.87% 00:30:15.136 nvme0n4: ios=1577/2025, merge=0/0, ticks=517/332, in_queue=849, util=97.12% 00:30:15.136 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:15.136 [global] 00:30:15.136 thread=1 00:30:15.136 invalidate=1 00:30:15.136 rw=write 00:30:15.136 time_based=1 00:30:15.136 runtime=1 00:30:15.136 ioengine=libaio 00:30:15.136 direct=1 00:30:15.136 bs=4096 00:30:15.136 iodepth=128 00:30:15.136 norandommap=0 00:30:15.136 numjobs=1 00:30:15.136 00:30:15.136 verify_dump=1 00:30:15.136 verify_backlog=512 00:30:15.136 verify_state_save=0 00:30:15.136 do_verify=1 00:30:15.136 verify=crc32c-intel 00:30:15.136 [job0] 00:30:15.136 filename=/dev/nvme0n1 00:30:15.136 [job1] 00:30:15.136 filename=/dev/nvme0n2 00:30:15.136 [job2] 00:30:15.136 filename=/dev/nvme0n3 00:30:15.136 [job3] 00:30:15.136 filename=/dev/nvme0n4 00:30:15.136 Could not set queue depth (nvme0n1) 00:30:15.136 Could not set queue depth (nvme0n2) 00:30:15.136 Could not set queue depth (nvme0n3) 00:30:15.136 Could not set queue depth (nvme0n4) 00:30:15.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:15.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:15.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:15.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:15.393 fio-3.35 00:30:15.393 Starting 4 threads 00:30:16.765 00:30:16.765 job0: (groupid=0, jobs=1): err= 0: pid=3039422: Mon Nov 4 16:41:43 2024 00:30:16.765 read: IOPS=6358, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1006msec) 00:30:16.765 
slat (nsec): min=1357, max=9133.5k, avg=79716.59, stdev=635110.03 00:30:16.765 clat (usec): min=3043, max=20688, avg=10302.92, stdev=2548.23 00:30:16.765 lat (usec): min=3052, max=21830, avg=10382.64, stdev=2605.40 00:30:16.765 clat percentiles (usec): 00:30:16.765 | 1.00th=[ 5997], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8586], 00:30:16.765 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:30:16.765 | 70.00th=[10421], 80.00th=[11994], 90.00th=[14746], 95.00th=[16057], 00:30:16.765 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20055], 99.95th=[20579], 00:30:16.765 | 99.99th=[20579] 00:30:16.765 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:30:16.765 slat (usec): min=2, max=7948, avg=67.92, stdev=495.27 00:30:16.765 clat (usec): min=1394, max=20686, avg=9268.91, stdev=2188.72 00:30:16.765 lat (usec): min=1408, max=20691, avg=9336.83, stdev=2218.52 00:30:16.765 clat percentiles (usec): 00:30:16.765 | 1.00th=[ 4555], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 7439], 00:30:16.765 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:30:16.765 | 70.00th=[10028], 80.00th=[10159], 90.00th=[12649], 95.00th=[13173], 00:30:16.765 | 99.00th=[14091], 99.50th=[14091], 99.90th=[17695], 99.95th=[17957], 00:30:16.765 | 99.99th=[20579] 00:30:16.765 bw ( KiB/s): min=26568, max=26680, per=37.15%, avg=26624.00, stdev=79.20, samples=2 00:30:16.765 iops : min= 6642, max= 6670, avg=6656.00, stdev=19.80, samples=2 00:30:16.765 lat (msec) : 2=0.02%, 4=0.40%, 10=66.25%, 20=33.22%, 50=0.11% 00:30:16.765 cpu : usr=6.57%, sys=6.47%, ctx=454, majf=0, minf=1 00:30:16.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:30:16.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.765 issued rwts: total=6397,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.765 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:30:16.765 job1: (groupid=0, jobs=1): err= 0: pid=3039436: Mon Nov 4 16:41:43 2024 00:30:16.765 read: IOPS=5773, BW=22.6MiB/s (23.6MB/s)(23.5MiB/1042msec) 00:30:16.765 slat (nsec): min=1463, max=5686.7k, avg=81103.05, stdev=394947.26 00:30:16.765 clat (usec): min=394, max=59297, avg=11436.45, stdev=5613.25 00:30:16.765 lat (usec): min=402, max=59309, avg=11517.55, stdev=5604.42 00:30:16.765 clat percentiles (usec): 00:30:16.765 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028], 00:30:16.765 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:30:16.765 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[12649], 00:30:16.765 | 99.00th=[47449], 99.50th=[50070], 99.90th=[52167], 99.95th=[59507], 00:30:16.765 | 99.99th=[59507] 00:30:16.765 write: IOPS=5896, BW=23.0MiB/s (24.2MB/s)(24.0MiB/1042msec); 0 zone resets 00:30:16.765 slat (usec): min=2, max=8450, avg=78.89, stdev=389.92 00:30:16.765 clat (usec): min=1286, max=18168, avg=10317.99, stdev=983.86 00:30:16.765 lat (usec): min=1298, max=18202, avg=10396.87, stdev=970.06 00:30:16.765 clat percentiles (usec): 00:30:16.766 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:30:16.766 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:30:16.766 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11863], 00:30:16.766 | 99.00th=[12518], 99.50th=[13042], 99.90th=[15926], 99.95th=[17957], 00:30:16.766 | 99.99th=[18220] 00:30:16.766 bw ( KiB/s): min=24576, max=24576, per=34.29%, avg=24576.00, stdev= 0.00, samples=2 00:30:16.766 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:30:16.766 lat (usec) : 500=0.02%, 1000=0.09% 00:30:16.766 lat (msec) : 2=0.14%, 10=24.33%, 20=74.35%, 50=0.79%, 100=0.29% 00:30:16.766 cpu : usr=3.65%, sys=4.61%, ctx=699, majf=0, minf=2 00:30:16.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:16.766 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.766 issued rwts: total=6016,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.766 job2: (groupid=0, jobs=1): err= 0: pid=3039454: Mon Nov 4 16:41:43 2024 00:30:16.766 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:30:16.766 slat (nsec): min=1689, max=25966k, avg=274658.72, stdev=1841692.48 00:30:16.766 clat (usec): min=13757, max=87814, avg=37951.90, stdev=15744.59 00:30:16.766 lat (usec): min=13762, max=92812, avg=38226.56, stdev=15842.82 00:30:16.766 clat percentiles (usec): 00:30:16.766 | 1.00th=[13829], 5.00th=[14353], 10.00th=[14484], 20.00th=[22414], 00:30:16.766 | 30.00th=[29754], 40.00th=[33817], 50.00th=[37487], 60.00th=[41157], 00:30:16.766 | 70.00th=[43779], 80.00th=[52167], 90.00th=[60031], 95.00th=[64750], 00:30:16.766 | 99.00th=[78119], 99.50th=[80217], 99.90th=[87557], 99.95th=[87557], 00:30:16.766 | 99.99th=[87557] 00:30:16.766 write: IOPS=1760, BW=7044KiB/s (7213kB/s)(7100KiB/1008msec); 0 zone resets 00:30:16.766 slat (usec): min=2, max=27402, avg=321.35, stdev=1910.94 00:30:16.766 clat (msec): min=3, max=121, avg=39.05, stdev=26.61 00:30:16.766 lat (msec): min=6, max=121, avg=39.37, stdev=26.77 00:30:16.766 clat percentiles (msec): 00:30:16.766 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 22], 00:30:16.766 | 30.00th=[ 23], 40.00th=[ 27], 50.00th=[ 30], 60.00th=[ 38], 00:30:16.766 | 70.00th=[ 45], 80.00th=[ 51], 90.00th=[ 86], 95.00th=[ 102], 00:30:16.766 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:30:16.766 | 99.99th=[ 122] 00:30:16.766 bw ( KiB/s): min= 4984, max= 8192, per=9.19%, avg=6588.00, stdev=2268.40, samples=2 00:30:16.766 iops : min= 1246, max= 2048, avg=1647.00, stdev=567.10, samples=2 00:30:16.766 lat (msec) : 4=0.03%, 10=0.06%, 20=18.03%, 50=59.26%, 100=19.75% 
00:30:16.766 lat (msec) : 250=2.87% 00:30:16.766 cpu : usr=1.19%, sys=2.98%, ctx=160, majf=0, minf=1 00:30:16.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:30:16.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.766 issued rwts: total=1536,1775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.766 job3: (groupid=0, jobs=1): err= 0: pid=3039460: Mon Nov 4 16:41:43 2024 00:30:16.766 read: IOPS=4030, BW=15.7MiB/s (16.5MB/s)(15.9MiB/1007msec) 00:30:16.766 slat (usec): min=2, max=13669, avg=122.90, stdev=898.51 00:30:16.766 clat (usec): min=4144, max=56130, avg=16017.95, stdev=5866.87 00:30:16.766 lat (usec): min=5826, max=56142, avg=16140.85, stdev=5940.25 00:30:16.766 clat percentiles (usec): 00:30:16.766 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[11863], 20.00th=[12387], 00:30:16.766 | 30.00th=[12649], 40.00th=[13304], 50.00th=[15008], 60.00th=[15795], 00:30:16.766 | 70.00th=[17171], 80.00th=[18482], 90.00th=[21103], 95.00th=[25035], 00:30:16.766 | 99.00th=[45876], 99.50th=[53216], 99.90th=[55837], 99.95th=[56361], 00:30:16.766 | 99.99th=[56361] 00:30:16.766 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:30:16.766 slat (usec): min=3, max=15094, avg=109.50, stdev=873.06 00:30:16.766 clat (usec): min=4317, max=56090, avg=14527.37, stdev=5168.97 00:30:16.766 lat (usec): min=4328, max=56095, avg=14636.87, stdev=5230.93 00:30:16.766 clat percentiles (usec): 00:30:16.766 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[10421], 20.00th=[11076], 00:30:16.766 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12780], 60.00th=[15139], 00:30:16.766 | 70.00th=[15926], 80.00th=[16909], 90.00th=[21365], 95.00th=[22676], 00:30:16.766 | 99.00th=[38011], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:30:16.766 | 99.99th=[55837] 00:30:16.766 
bw ( KiB/s): min=16384, max=16384, per=22.86%, avg=16384.00, stdev= 0.00, samples=2 00:30:16.766 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:30:16.766 lat (msec) : 10=5.62%, 20=80.76%, 50=13.15%, 100=0.48% 00:30:16.766 cpu : usr=3.98%, sys=5.96%, ctx=207, majf=0, minf=1 00:30:16.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:16.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.766 issued rwts: total=4059,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.766 00:30:16.766 Run status group 0 (all jobs): 00:30:16.766 READ: bw=67.5MiB/s (70.8MB/s), 6095KiB/s-24.8MiB/s (6242kB/s-26.0MB/s), io=70.3MiB (73.8MB), run=1006-1042msec 00:30:16.766 WRITE: bw=70.0MiB/s (73.4MB/s), 7044KiB/s-25.8MiB/s (7213kB/s-27.1MB/s), io=72.9MiB (76.5MB), run=1006-1042msec 00:30:16.766 00:30:16.766 Disk stats (read/write): 00:30:16.766 nvme0n1: ios=5368/5632, merge=0/0, ticks=52820/50527, in_queue=103347, util=89.58% 00:30:16.766 nvme0n2: ios=5143/5252, merge=0/0, ticks=15564/16318, in_queue=31882, util=90.55% 00:30:16.766 nvme0n3: ios=1580/1543, merge=0/0, ticks=27140/24912, in_queue=52052, util=93.01% 00:30:16.766 nvme0n4: ios=3129/3401, merge=0/0, ticks=50685/50011, in_queue=100696, util=92.01% 00:30:16.766 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:16.766 [global] 00:30:16.766 thread=1 00:30:16.766 invalidate=1 00:30:16.766 rw=randwrite 00:30:16.766 time_based=1 00:30:16.766 runtime=1 00:30:16.766 ioengine=libaio 00:30:16.766 direct=1 00:30:16.766 bs=4096 00:30:16.766 iodepth=128 00:30:16.766 norandommap=0 00:30:16.766 numjobs=1 00:30:16.766 00:30:16.766 verify_dump=1 
00:30:16.766 verify_backlog=512 00:30:16.766 verify_state_save=0 00:30:16.766 do_verify=1 00:30:16.766 verify=crc32c-intel 00:30:16.766 [job0] 00:30:16.766 filename=/dev/nvme0n1 00:30:16.766 [job1] 00:30:16.766 filename=/dev/nvme0n2 00:30:16.766 [job2] 00:30:16.766 filename=/dev/nvme0n3 00:30:16.766 [job3] 00:30:16.766 filename=/dev/nvme0n4 00:30:16.766 Could not set queue depth (nvme0n1) 00:30:16.766 Could not set queue depth (nvme0n2) 00:30:16.766 Could not set queue depth (nvme0n3) 00:30:16.766 Could not set queue depth (nvme0n4) 00:30:17.023 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:17.023 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:17.023 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:17.023 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:17.023 fio-3.35 00:30:17.023 Starting 4 threads 00:30:18.393 00:30:18.393 job0: (groupid=0, jobs=1): err= 0: pid=3039869: Mon Nov 4 16:41:44 2024 00:30:18.393 read: IOPS=3182, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:30:18.393 slat (nsec): min=1668, max=12264k, avg=126849.70, stdev=852488.53 00:30:18.393 clat (usec): min=2387, max=55791, avg=14856.21, stdev=7418.30 00:30:18.393 lat (usec): min=5482, max=55797, avg=14983.06, stdev=7494.62 00:30:18.393 clat percentiles (usec): 00:30:18.393 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11076], 00:30:18.393 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12780], 00:30:18.393 | 70.00th=[13960], 80.00th=[17433], 90.00th=[23725], 95.00th=[31327], 00:30:18.393 | 99.00th=[44827], 99.50th=[50070], 99.90th=[55837], 99.95th=[55837], 00:30:18.393 | 99.99th=[55837] 00:30:18.393 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:30:18.393 slat (usec): 
min=2, max=12327, avg=159.75, stdev=811.62 00:30:18.393 clat (usec): min=1730, max=55781, avg=22302.11, stdev=15309.58 00:30:18.393 lat (usec): min=1744, max=55787, avg=22461.86, stdev=15418.45 00:30:18.393 clat percentiles (usec): 00:30:18.393 | 1.00th=[ 5538], 5.00th=[ 5735], 10.00th=[ 8455], 20.00th=[ 9503], 00:30:18.393 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12780], 60.00th=[17957], 00:30:18.393 | 70.00th=[34866], 80.00th=[40109], 90.00th=[46924], 95.00th=[49546], 00:30:18.393 | 99.00th=[52167], 99.50th=[52167], 99.90th=[54789], 99.95th=[55837], 00:30:18.393 | 99.99th=[55837] 00:30:18.393 bw ( KiB/s): min= 9864, max=18792, per=21.69%, avg=14328.00, stdev=6313.05, samples=2 00:30:18.393 iops : min= 2466, max= 4698, avg=3582.00, stdev=1578.26, samples=2 00:30:18.393 lat (msec) : 2=0.03%, 4=0.10%, 10=19.42%, 20=53.08%, 50=25.66% 00:30:18.393 lat (msec) : 100=1.71% 00:30:18.393 cpu : usr=3.39%, sys=4.68%, ctx=291, majf=0, minf=1 00:30:18.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:30:18.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.393 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.393 job1: (groupid=0, jobs=1): err= 0: pid=3039880: Mon Nov 4 16:41:44 2024 00:30:18.393 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:30:18.393 slat (nsec): min=1276, max=21161k, avg=60638.75, stdev=525882.04 00:30:18.394 clat (usec): min=780, max=39950, avg=9526.01, stdev=4879.58 00:30:18.394 lat (usec): min=786, max=39958, avg=9586.64, stdev=4909.48 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[ 1827], 5.00th=[ 5080], 10.00th=[ 6128], 20.00th=[ 6915], 00:30:18.394 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8717], 00:30:18.394 | 70.00th=[ 9372], 80.00th=[10421], 
90.00th=[13435], 95.00th=[20841], 00:30:18.394 | 99.00th=[33424], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:30:18.394 | 99.99th=[40109] 00:30:18.394 write: IOPS=7529, BW=29.4MiB/s (30.8MB/s)(29.6MiB/1007msec); 0 zone resets 00:30:18.394 slat (usec): min=2, max=13016, avg=55.05, stdev=429.02 00:30:18.394 clat (usec): min=239, max=29281, avg=7855.78, stdev=2882.70 00:30:18.394 lat (usec): min=250, max=29467, avg=7910.82, stdev=2912.97 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[ 1401], 5.00th=[ 3392], 10.00th=[ 4621], 20.00th=[ 6259], 00:30:18.394 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094], 00:30:18.394 | 70.00th=[ 8291], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[12387], 00:30:18.394 | 99.00th=[18482], 99.50th=[21365], 99.90th=[26346], 99.95th=[26346], 00:30:18.394 | 99.99th=[29230] 00:30:18.394 bw ( KiB/s): min=28672, max=30960, per=45.13%, avg=29816.00, stdev=1617.86, samples=2 00:30:18.394 iops : min= 7168, max= 7740, avg=7454.00, stdev=404.47, samples=2 00:30:18.394 lat (usec) : 250=0.01%, 750=0.07%, 1000=0.16% 00:30:18.394 lat (msec) : 2=1.72%, 4=2.66%, 10=76.30%, 20=15.90%, 50=3.20% 00:30:18.394 cpu : usr=5.47%, sys=7.65%, ctx=554, majf=0, minf=1 00:30:18.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:30:18.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.394 issued rwts: total=7168,7582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.394 job2: (groupid=0, jobs=1): err= 0: pid=3039896: Mon Nov 4 16:41:44 2024 00:30:18.394 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:30:18.394 slat (usec): min=4, max=33938, avg=237.22, stdev=1561.49 00:30:18.394 clat (usec): min=12460, max=87331, avg=30262.57, stdev=17411.54 00:30:18.394 lat (usec): min=16177, max=87339, avg=30499.79, 
stdev=17480.34 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[13960], 5.00th=[16450], 10.00th=[17171], 20.00th=[17695], 00:30:18.394 | 30.00th=[17957], 40.00th=[22414], 50.00th=[24773], 60.00th=[25297], 00:30:18.394 | 70.00th=[30802], 80.00th=[39060], 90.00th=[52167], 95.00th=[78119], 00:30:18.394 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:30:18.394 | 99.99th=[87557] 00:30:18.394 write: IOPS=2070, BW=8283KiB/s (8481kB/s)(8324KiB/1005msec); 0 zone resets 00:30:18.394 slat (usec): min=7, max=32493, avg=238.62, stdev=1540.92 00:30:18.394 clat (usec): min=3157, max=78695, avg=29108.68, stdev=14360.80 00:30:18.394 lat (usec): min=11713, max=78705, avg=29347.30, stdev=14408.10 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[11994], 5.00th=[16581], 10.00th=[17171], 20.00th=[18482], 00:30:18.394 | 30.00th=[20317], 40.00th=[22414], 50.00th=[24249], 60.00th=[24773], 00:30:18.394 | 70.00th=[25297], 80.00th=[46924], 90.00th=[54264], 95.00th=[56886], 00:30:18.394 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:30:18.394 | 99.99th=[79168] 00:30:18.394 bw ( KiB/s): min= 6408, max= 9976, per=12.40%, avg=8192.00, stdev=2522.96, samples=2 00:30:18.394 iops : min= 1602, max= 2494, avg=2048.00, stdev=630.74, samples=2 00:30:18.394 lat (msec) : 4=0.02%, 20=32.07%, 50=55.78%, 100=12.13% 00:30:18.394 cpu : usr=1.59%, sys=4.48%, ctx=132, majf=0, minf=1 00:30:18.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:30:18.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.394 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.394 job3: (groupid=0, jobs=1): err= 0: pid=3039901: Mon Nov 4 16:41:44 2024 00:30:18.394 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 
00:30:18.394 slat (nsec): min=1575, max=15117k, avg=139651.97, stdev=945064.91 00:30:18.394 clat (usec): min=3573, max=81096, avg=18098.61, stdev=11198.03 00:30:18.394 lat (usec): min=3582, max=81105, avg=18238.26, stdev=11290.16 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[ 3949], 5.00th=[ 5342], 10.00th=[ 7570], 20.00th=[ 9241], 00:30:18.394 | 30.00th=[13435], 40.00th=[16450], 50.00th=[18220], 60.00th=[18482], 00:30:18.394 | 70.00th=[18744], 80.00th=[21627], 90.00th=[26084], 95.00th=[34866], 00:30:18.394 | 99.00th=[74974], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:30:18.394 | 99.99th=[81265] 00:30:18.394 write: IOPS=3365, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1006msec); 0 zone resets 00:30:18.394 slat (usec): min=2, max=15062, avg=133.00, stdev=826.30 00:30:18.394 clat (usec): min=742, max=81048, avg=21283.25, stdev=14547.97 00:30:18.394 lat (usec): min=750, max=81052, avg=21416.25, stdev=14618.27 00:30:18.394 clat percentiles (usec): 00:30:18.394 | 1.00th=[ 2040], 5.00th=[ 5342], 10.00th=[ 7504], 20.00th=[ 9110], 00:30:18.394 | 30.00th=[11076], 40.00th=[11994], 50.00th=[16909], 60.00th=[20317], 00:30:18.394 | 70.00th=[27919], 80.00th=[34866], 90.00th=[38536], 95.00th=[47449], 00:30:18.394 | 99.00th=[69731], 99.50th=[71828], 99.90th=[72877], 99.95th=[74974], 00:30:18.394 | 99.99th=[81265] 00:30:18.394 bw ( KiB/s): min= 9680, max=16384, per=19.72%, avg=13032.00, stdev=4740.44, samples=2 00:30:18.394 iops : min= 2420, max= 4096, avg=3258.00, stdev=1185.11, samples=2 00:30:18.394 lat (usec) : 750=0.05%, 1000=0.08% 00:30:18.394 lat (msec) : 2=0.22%, 4=1.77%, 10=18.69%, 20=46.90%, 50=28.99% 00:30:18.394 lat (msec) : 100=3.31% 00:30:18.394 cpu : usr=2.79%, sys=4.28%, ctx=297, majf=0, minf=1 00:30:18.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:18.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:30:18.394 issued rwts: total=3072,3386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.394 00:30:18.394 Run status group 0 (all jobs): 00:30:18.394 READ: bw=60.1MiB/s (63.0MB/s), 8151KiB/s-27.8MiB/s (8347kB/s-29.2MB/s), io=60.5MiB (63.4MB), run=1005-1007msec 00:30:18.394 WRITE: bw=64.5MiB/s (67.7MB/s), 8283KiB/s-29.4MiB/s (8481kB/s-30.8MB/s), io=65.0MiB (68.1MB), run=1005-1007msec 00:30:18.394 00:30:18.394 Disk stats (read/write): 00:30:18.394 nvme0n1: ios=3122/3143, merge=0/0, ticks=42748/60695, in_queue=103443, util=89.98% 00:30:18.394 nvme0n2: ios=6114/6144, merge=0/0, ticks=42601/36916, in_queue=79517, util=94.02% 00:30:18.394 nvme0n3: ios=1554/1696, merge=0/0, ticks=14088/13212, in_queue=27300, util=97.82% 00:30:18.394 nvme0n4: ios=2640/3072, merge=0/0, ticks=33765/38662, in_queue=72427, util=94.34% 00:30:18.394 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:18.394 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3039975 00:30:18.394 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:18.394 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:18.394 [global] 00:30:18.394 thread=1 00:30:18.394 invalidate=1 00:30:18.394 rw=read 00:30:18.394 time_based=1 00:30:18.394 runtime=10 00:30:18.394 ioengine=libaio 00:30:18.394 direct=1 00:30:18.394 bs=4096 00:30:18.394 iodepth=1 00:30:18.394 norandommap=1 00:30:18.394 numjobs=1 00:30:18.394 00:30:18.394 [job0] 00:30:18.394 filename=/dev/nvme0n1 00:30:18.394 [job1] 00:30:18.394 filename=/dev/nvme0n2 00:30:18.394 [job2] 00:30:18.394 filename=/dev/nvme0n3 00:30:18.394 [job3] 00:30:18.394 filename=/dev/nvme0n4 00:30:18.394 Could not set queue depth 
(nvme0n1) 00:30:18.394 Could not set queue depth (nvme0n2) 00:30:18.394 Could not set queue depth (nvme0n3) 00:30:18.394 Could not set queue depth (nvme0n4) 00:30:18.651 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.651 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.651 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.651 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.651 fio-3.35 00:30:18.651 Starting 4 threads 00:30:21.175 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:21.432 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43831296, buflen=4096 00:30:21.432 fio: pid=3040306, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:21.432 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:21.689 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:21.689 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:21.689 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=307200, buflen=4096 00:30:21.689 fio: pid=3040304, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:21.946 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47050752, buflen=4096 00:30:21.946 fio: pid=3040282, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:21.946 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:21.946 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:22.204 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.204 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:22.204 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=344064, buflen=4096 00:30:22.204 fio: pid=3040296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:22.204 00:30:22.204 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3040282: Mon Nov 4 16:41:48 2024 00:30:22.204 read: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(44.9MiB/3152msec) 00:30:22.204 slat (usec): min=7, max=15552, avg=13.41, stdev=241.53 00:30:22.204 clat (usec): min=190, max=532, avg=257.21, stdev=24.45 00:30:22.204 lat (usec): min=206, max=15971, avg=270.62, stdev=245.94 00:30:22.204 clat percentiles (usec): 00:30:22.204 | 1.00th=[ 206], 5.00th=[ 221], 10.00th=[ 239], 20.00th=[ 243], 00:30:22.204 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:30:22.204 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 297], 00:30:22.204 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 469], 99.95th=[ 482], 00:30:22.204 | 99.99th=[ 490] 00:30:22.204 bw ( KiB/s): min=13488, max=15520, per=55.76%, avg=14672.83, stdev=968.58, samples=6 00:30:22.204 iops : min= 3372, max= 3880, 
avg=3668.17, stdev=242.15, samples=6 00:30:22.204 lat (usec) : 250=51.69%, 500=48.29%, 750=0.01% 00:30:22.204 cpu : usr=1.87%, sys=6.44%, ctx=11494, majf=0, minf=1 00:30:22.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 issued rwts: total=11488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:22.204 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3040296: Mon Nov 4 16:41:48 2024 00:30:22.204 read: IOPS=25, BW=98.9KiB/s (101kB/s)(336KiB/3397msec) 00:30:22.204 slat (usec): min=8, max=10749, avg=150.97, stdev=1165.98 00:30:22.204 clat (usec): min=255, max=41204, avg=40020.89, stdev=6237.72 00:30:22.204 lat (usec): min=266, max=51944, avg=40172.83, stdev=6371.51 00:30:22.204 clat percentiles (usec): 00:30:22.204 | 1.00th=[ 255], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:22.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:22.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:22.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:22.204 | 99.99th=[41157] 00:30:22.204 bw ( KiB/s): min= 93, max= 104, per=0.38%, avg=99.50, stdev= 5.05, samples=6 00:30:22.204 iops : min= 23, max= 26, avg=24.83, stdev= 1.33, samples=6 00:30:22.204 lat (usec) : 500=2.35% 00:30:22.204 lat (msec) : 50=96.47% 00:30:22.204 cpu : usr=0.09%, sys=0.00%, ctx=88, majf=0, minf=2 00:30:22.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 issued rwts: total=85,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:30:22.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:22.204 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3040304: Mon Nov 4 16:41:48 2024 00:30:22.204 read: IOPS=25, BW=101KiB/s (103kB/s)(300KiB/2983msec) 00:30:22.204 slat (nsec): min=10223, max=31809, avg=24215.72, stdev=3201.00 00:30:22.204 clat (usec): min=363, max=41997, avg=39461.15, stdev=7958.15 00:30:22.204 lat (usec): min=389, max=42023, avg=39485.37, stdev=7958.87 00:30:22.204 clat percentiles (usec): 00:30:22.204 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:22.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:22.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:30:22.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:22.204 | 99.99th=[42206] 00:30:22.204 bw ( KiB/s): min= 96, max= 104, per=0.38%, avg=99.20, stdev= 4.38, samples=5 00:30:22.204 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:30:22.204 lat (usec) : 500=2.63% 00:30:22.204 lat (msec) : 2=1.32%, 50=94.74% 00:30:22.204 cpu : usr=0.13%, sys=0.00%, ctx=76, majf=0, minf=2 00:30:22.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:22.204 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3040306: Mon Nov 4 16:41:48 2024 00:30:22.204 read: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(41.8MiB/2732msec) 00:30:22.204 slat (nsec): min=6261, max=37985, avg=7540.14, stdev=861.31 00:30:22.204 clat (usec): min=186, max=499, 
avg=244.68, stdev=11.95 00:30:22.204 lat (usec): min=193, max=537, avg=252.22, stdev=12.01 00:30:22.204 clat percentiles (usec): 00:30:22.204 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:30:22.204 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:30:22.204 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 260], 00:30:22.204 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 437], 00:30:22.204 | 99.99th=[ 498] 00:30:22.204 bw ( KiB/s): min=15512, max=16376, per=60.23%, avg=15849.60, stdev=448.44, samples=5 00:30:22.204 iops : min= 3878, max= 4094, avg=3962.40, stdev=112.11, samples=5 00:30:22.204 lat (usec) : 250=69.22%, 500=30.77% 00:30:22.204 cpu : usr=1.17%, sys=3.59%, ctx=10702, majf=0, minf=2 00:30:22.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.204 issued rwts: total=10702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:22.204 00:30:22.204 Run status group 0 (all jobs): 00:30:22.204 READ: bw=25.7MiB/s (26.9MB/s), 98.9KiB/s-15.3MiB/s (101kB/s-16.0MB/s), io=87.3MiB (91.5MB), run=2732-3397msec 00:30:22.204 00:30:22.204 Disk stats (read/write): 00:30:22.204 nvme0n1: ios=11397/0, merge=0/0, ticks=3792/0, in_queue=3792, util=97.69% 00:30:22.204 nvme0n2: ios=83/0, merge=0/0, ticks=3322/0, in_queue=3322, util=96.06% 00:30:22.204 nvme0n3: ios=72/0, merge=0/0, ticks=2838/0, in_queue=2838, util=96.52% 00:30:22.204 nvme0n4: ios=10287/0, merge=0/0, ticks=2459/0, in_queue=2459, util=96.44% 00:30:22.462 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.462 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:22.462 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.462 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:22.719 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.719 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:22.976 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:22.976 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:23.233 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:23.233 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3039975 00:30:23.233 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:23.233 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:23.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:23.233 16:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:23.233 nvmf hotplug test: fio failed as expected 00:30:23.233 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.490 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.490 rmmod nvme_tcp 00:30:23.490 rmmod nvme_fabrics 00:30:23.490 rmmod nvme_keyring 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3037483 ']' 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3037483 ']' 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3037483' 00:30:23.748 killing process with pid 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3037483 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:23.748 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.749 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.277 00:30:26.277 real 0m25.390s 00:30:26.277 user 1m29.971s 00:30:26.277 sys 0m11.258s 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.277 ************************************ 00:30:26.277 END TEST nvmf_fio_target 00:30:26.277 ************************************ 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.277 ************************************ 00:30:26.277 START TEST nvmf_bdevio 00:30:26.277 ************************************ 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 00:30:26.277 * Looking for test storage... 00:30:26.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- scripts/common.sh@368 -- # return 0 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.277 --rc genhtml_branch_coverage=1 00:30:26.277 --rc genhtml_function_coverage=1 00:30:26.277 --rc genhtml_legend=1 00:30:26.277 --rc geninfo_all_blocks=1 00:30:26.277 --rc geninfo_unexecuted_blocks=1 00:30:26.277 00:30:26.277 ' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.277 --rc genhtml_branch_coverage=1 00:30:26.277 --rc genhtml_function_coverage=1 00:30:26.277 --rc genhtml_legend=1 00:30:26.277 --rc geninfo_all_blocks=1 00:30:26.277 --rc geninfo_unexecuted_blocks=1 00:30:26.277 00:30:26.277 ' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.277 --rc genhtml_branch_coverage=1 00:30:26.277 --rc genhtml_function_coverage=1 00:30:26.277 --rc genhtml_legend=1 00:30:26.277 --rc geninfo_all_blocks=1 00:30:26.277 --rc geninfo_unexecuted_blocks=1 00:30:26.277 00:30:26.277 ' 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.277 --rc genhtml_branch_coverage=1 00:30:26.277 --rc genhtml_function_coverage=1 00:30:26.277 --rc genhtml_legend=1 00:30:26.277 --rc geninfo_all_blocks=1 00:30:26.277 --rc geninfo_unexecuted_blocks=1 00:30:26.277 00:30:26.277 ' 00:30:26.277 16:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.277 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.278 16:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.278 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.538 16:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.538 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:31.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:31.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.539 16:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:31.539 Found net devices under 0000:86:00.0: cvl_0_0 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:31.539 Found net devices under 0000:86:00.1: cvl_0_1 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.539 16:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.539 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:30:31.798 00:30:31.798 --- 10.0.0.2 ping statistics --- 00:30:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.798 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:31.798 00:30:31.798 --- 10.0.0.1 ping statistics --- 00:30:31.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.798 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3044543 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3044543 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3044543 ']' 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.798 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:31.798 [2024-11-04 16:41:58.586805] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:31.798 [2024-11-04 16:41:58.587712] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:30:31.798 [2024-11-04 16:41:58.587747] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.056 [2024-11-04 16:41:58.655864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.056 [2024-11-04 16:41:58.697210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.056 [2024-11-04 16:41:58.697248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.056 [2024-11-04 16:41:58.697255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.056 [2024-11-04 16:41:58.697261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.056 [2024-11-04 16:41:58.697266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.056 [2024-11-04 16:41:58.698887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:32.056 [2024-11-04 16:41:58.698992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.056 [2024-11-04 16:41:58.699098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.056 [2024-11-04 16:41:58.699099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.056 [2024-11-04 16:41:58.764306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.056 [2024-11-04 16:41:58.765200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:32.056 [2024-11-04 16:41:58.765224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:32.056 [2024-11-04 16:41:58.765517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:32.056 [2024-11-04 16:41:58.765560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.056 [2024-11-04 16:41:58.827808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.056 Malloc0 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.056 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.314 [2024-11-04 16:41:58.891811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:32.314 { 00:30:32.314 "params": { 00:30:32.314 "name": "Nvme$subsystem", 00:30:32.314 "trtype": "$TEST_TRANSPORT", 00:30:32.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.314 "adrfam": "ipv4", 00:30:32.314 "trsvcid": "$NVMF_PORT", 00:30:32.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.314 "hdgst": ${hdgst:-false}, 00:30:32.314 "ddgst": ${ddgst:-false} 00:30:32.314 }, 00:30:32.314 "method": "bdev_nvme_attach_controller" 00:30:32.314 } 00:30:32.314 EOF 00:30:32.314 )") 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:32.314 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:32.314 "params": { 00:30:32.314 "name": "Nvme1", 00:30:32.314 "trtype": "tcp", 00:30:32.314 "traddr": "10.0.0.2", 00:30:32.314 "adrfam": "ipv4", 00:30:32.314 "trsvcid": "4420", 00:30:32.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.314 "hdgst": false, 00:30:32.314 "ddgst": false 00:30:32.314 }, 00:30:32.314 "method": "bdev_nvme_attach_controller" 00:30:32.314 }' 00:30:32.314 [2024-11-04 16:41:58.942501] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:30:32.314 [2024-11-04 16:41:58.942544] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044568 ] 00:30:32.314 [2024-11-04 16:41:59.005825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:32.314 [2024-11-04 16:41:59.049691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.314 [2024-11-04 16:41:59.049790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.314 [2024-11-04 16:41:59.049792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.571 I/O targets: 00:30:32.571 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:32.571 00:30:32.571 00:30:32.571 CUnit - A unit testing framework for C - Version 2.1-3 00:30:32.571 http://cunit.sourceforge.net/ 00:30:32.571 00:30:32.571 00:30:32.571 Suite: bdevio tests on: Nvme1n1 00:30:32.571 Test: blockdev write read block ...passed 00:30:32.571 Test: blockdev write zeroes read block ...passed 00:30:32.571 Test: blockdev write zeroes read no split ...passed 00:30:32.571 Test: blockdev 
write zeroes read split ...passed 00:30:32.571 Test: blockdev write zeroes read split partial ...passed 00:30:32.571 Test: blockdev reset ...[2024-11-04 16:41:59.348450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:32.571 [2024-11-04 16:41:59.348517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f1340 (9): Bad file descriptor 00:30:32.827 [2024-11-04 16:41:59.441534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:30:32.827 passed 00:30:32.827 Test: blockdev write read 8 blocks ...passed 00:30:32.827 Test: blockdev write read size > 128k ...passed 00:30:32.827 Test: blockdev write read invalid size ...passed 00:30:32.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:32.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:32.827 Test: blockdev write read max offset ...passed 00:30:32.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:32.827 Test: blockdev writev readv 8 blocks ...passed 00:30:32.827 Test: blockdev writev readv 30 x 1block ...passed 00:30:33.084 Test: blockdev writev readv block ...passed 00:30:33.084 Test: blockdev writev readv size > 128k ...passed 00:30:33.084 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:33.084 Test: blockdev comparev and writev ...[2024-11-04 16:41:59.692547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.692577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.692595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 
[2024-11-04 16:41:59.692607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.692914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.692925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.692938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.692945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.693236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.693247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.693258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.693265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.693554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.693565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.693576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.084 [2024-11-04 16:41:59.693583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:33.084 passed 00:30:33.084 Test: blockdev nvme passthru rw ...passed 00:30:33.084 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:41:59.775966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.084 [2024-11-04 16:41:59.775983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.776104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.084 [2024-11-04 16:41:59.776113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.776224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.084 [2024-11-04 16:41:59.776232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:33.084 [2024-11-04 16:41:59.776345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.084 [2024-11-04 16:41:59.776354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:33.084 passed 00:30:33.084 Test: blockdev nvme admin passthru ...passed 00:30:33.084 Test: blockdev copy ...passed 00:30:33.084 00:30:33.084 Run Summary: Type Total Ran Passed Failed Inactive 00:30:33.084 suites 1 1 n/a 0 0 00:30:33.084 tests 23 23 23 0 0 00:30:33.084 asserts 152 152 152 0 n/a 00:30:33.084 00:30:33.084 Elapsed time = 1.259 
seconds 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.341 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.341 rmmod nvme_tcp 00:30:33.341 rmmod nvme_fabrics 00:30:33.341 rmmod nvme_keyring 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3044543 ']' 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3044543 00:30:33.341 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3044543 ']' 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3044543 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044543 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044543' 00:30:33.342 killing process with pid 3044543 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3044543 00:30:33.342 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3044543 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.599 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.125 00:30:36.125 real 0m9.668s 00:30:36.125 user 0m8.703s 00:30:36.125 sys 0m4.985s 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:36.125 ************************************ 00:30:36.125 END TEST nvmf_bdevio 00:30:36.125 ************************************ 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:36.125 00:30:36.125 real 4m23.122s 00:30:36.125 user 9m1.792s 00:30:36.125 sys 1m45.431s 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:30:36.125 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.125 ************************************ 00:30:36.125 END TEST nvmf_target_core_interrupt_mode 00:30:36.125 ************************************ 00:30:36.125 16:42:02 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:36.125 16:42:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.125 16:42:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.125 16:42:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.125 ************************************ 00:30:36.125 START TEST nvmf_interrupt 00:30:36.125 ************************************ 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:36.125 * Looking for test storage... 
00:30:36.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.125 --rc genhtml_branch_coverage=1 00:30:36.125 --rc genhtml_function_coverage=1 00:30:36.125 --rc genhtml_legend=1 00:30:36.125 --rc geninfo_all_blocks=1 00:30:36.125 --rc geninfo_unexecuted_blocks=1 00:30:36.125 00:30:36.125 ' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.125 --rc genhtml_branch_coverage=1 00:30:36.125 --rc 
genhtml_function_coverage=1 00:30:36.125 --rc genhtml_legend=1 00:30:36.125 --rc geninfo_all_blocks=1 00:30:36.125 --rc geninfo_unexecuted_blocks=1 00:30:36.125 00:30:36.125 ' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.125 --rc genhtml_branch_coverage=1 00:30:36.125 --rc genhtml_function_coverage=1 00:30:36.125 --rc genhtml_legend=1 00:30:36.125 --rc geninfo_all_blocks=1 00:30:36.125 --rc geninfo_unexecuted_blocks=1 00:30:36.125 00:30:36.125 ' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.125 --rc genhtml_branch_coverage=1 00:30:36.125 --rc genhtml_function_coverage=1 00:30:36.125 --rc genhtml_legend=1 00:30:36.125 --rc geninfo_all_blocks=1 00:30:36.125 --rc geninfo_unexecuted_blocks=1 00:30:36.125 00:30:36.125 ' 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.125 
16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:36.125 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.126 
16:42:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.126 16:42:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.126 
16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.126 16:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.383 16:42:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:41.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:41.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.383 16:42:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:41.383 Found net devices under 0000:86:00.0: cvl_0_0 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:41.383 Found net devices under 0000:86:00.1: cvl_0_1 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.383 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.384 16:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.384 16:42:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:30:41.384 00:30:41.384 --- 10.0.0.2 ping statistics --- 00:30:41.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.384 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:30:41.384 00:30:41.384 --- 10.0.0.1 ping statistics --- 00:30:41.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.384 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.384 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.641 16:42:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3048503 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3048503 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3048503 ']' 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.641 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.642 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.642 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.642 [2024-11-04 16:42:08.299839] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:41.642 [2024-11-04 16:42:08.300822] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:30:41.642 [2024-11-04 16:42:08.300866] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.642 [2024-11-04 16:42:08.369725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:41.642 [2024-11-04 16:42:08.409385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.642 [2024-11-04 16:42:08.409420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.642 [2024-11-04 16:42:08.409428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.642 [2024-11-04 16:42:08.409435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.642 [2024-11-04 16:42:08.409441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.642 [2024-11-04 16:42:08.410628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.642 [2024-11-04 16:42:08.410630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.900 [2024-11-04 16:42:08.477279] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:41.900 [2024-11-04 16:42:08.477613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:41.900 [2024-11-04 16:42:08.477637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:41.900 5000+0 records in 00:30:41.900 5000+0 records out 00:30:41.900 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179509 s, 570 MB/s 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 AIO0 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.900 16:42:08 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 [2024-11-04 16:42:08.603213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.900 [2024-11-04 16:42:08.627481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3048503 0 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 0 idle 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:41.900 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048503 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0' 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048503 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.158 
16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3048503 1 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 1 idle 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:42.158 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:42.416 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048571 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:30:42.416 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048571 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:30:42.416 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.416 16:42:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3048737 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3048503 0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3048503 0 busy 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048503 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.45 reactor_0' 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048503 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.45 reactor_0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:42.416 16:42:09 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3048503 1 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3048503 1 busy 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:42.416 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048571 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.30 reactor_1' 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048571 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.30 reactor_1 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.673 16:42:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3048737 00:30:52.972 Initializing NVMe Controllers 00:30:52.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.972 Controller IO queue size 256, less than required. 00:30:52.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:52.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:52.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:52.972 Initialization complete. Launching workers. 
00:30:52.972 ======================================================== 00:30:52.972 Latency(us) 00:30:52.972 Device Information : IOPS MiB/s Average min max 00:30:52.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16596.90 64.83 15433.24 2741.31 20667.22 00:30:52.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16484.80 64.39 15537.12 4508.22 20459.80 00:30:52.972 ======================================================== 00:30:52.972 Total : 33081.69 129.23 15485.00 2741.31 20667.22 00:30:52.972 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3048503 0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 0 idle 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048503 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0' 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048503 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3048503 1 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 1 idle 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:52.972 16:42:19 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048571 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:30:52.972 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048571 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
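The `waitforserial` helper invoked above after `nvme connect` polls `lsblk` until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears, sleeping between attempts. A hedged sketch of that loop; `WAIT_INTERVAL` is an illustrative knob added here for brevity, not part of the original autotest helper:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern: poll lsblk's NAME,SERIAL columns
# until a device with the expected serial shows up, give up after 16 tries.
wait_for_serial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if [ "$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")" -ge 1 ]; then
      return 0   # device visible, controller attach complete
    fi
    sleep "${WAIT_INTERVAL:-2}"
  done
  return 1       # timed out waiting for the namespace to appear
}
```

In the transcript the loop succeeds on the first re-check (`nvme_devices=1`) about two seconds after the connect, and the symmetric `waitforserial_disconnect` later inverts the grep to wait for the device to disappear.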
00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:52.973 16:42:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3048503 0 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 0 idle 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048503 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.37 reactor_0' 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048503 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.37 reactor_0 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.501 16:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3048503 1 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3048503 1 idle 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3048503 00:30:55.501 
16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3048503 -w 256 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3048571 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.06 reactor_1' 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3048571 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.06 reactor_1 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:55.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.501 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.759 rmmod nvme_tcp 00:30:55.759 rmmod nvme_fabrics 00:30:55.759 rmmod nvme_keyring 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.759 16:42:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3048503 ']' 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3048503 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3048503 ']' 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3048503 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3048503 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3048503' 00:30:55.759 killing process with pid 3048503 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3048503 00:30:55.759 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3048503 00:30:56.016 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.017 16:42:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.923 16:42:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.923 00:30:57.923 real 0m22.235s 00:30:57.923 user 0m39.531s 00:30:57.923 sys 0m7.983s 00:30:57.923 16:42:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.923 16:42:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 ************************************ 00:30:57.923 END TEST nvmf_interrupt 00:30:57.923 ************************************ 00:30:57.923 00:30:57.923 real 26m39.307s 00:30:57.923 user 55m36.445s 00:30:57.923 sys 8m54.258s 00:30:57.923 16:42:24 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.923 16:42:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 ************************************ 00:30:57.923 END TEST nvmf_tcp 00:30:57.923 ************************************ 00:30:58.181 16:42:24 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:58.181 16:42:24 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:58.181 16:42:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:58.181 16:42:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.181 16:42:24 -- common/autotest_common.sh@10 -- # set +x 00:30:58.181 ************************************ 
00:30:58.181 START TEST spdkcli_nvmf_tcp 00:30:58.181 ************************************ 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:58.181 * Looking for test storage... 00:30:58.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.181 --rc genhtml_branch_coverage=1 00:30:58.181 --rc genhtml_function_coverage=1 00:30:58.181 --rc genhtml_legend=1 00:30:58.181 --rc geninfo_all_blocks=1 00:30:58.181 --rc geninfo_unexecuted_blocks=1 00:30:58.181 00:30:58.181 ' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.181 --rc genhtml_branch_coverage=1 00:30:58.181 --rc genhtml_function_coverage=1 00:30:58.181 --rc genhtml_legend=1 00:30:58.181 --rc geninfo_all_blocks=1 
00:30:58.181 --rc geninfo_unexecuted_blocks=1 00:30:58.181 00:30:58.181 ' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.181 --rc genhtml_branch_coverage=1 00:30:58.181 --rc genhtml_function_coverage=1 00:30:58.181 --rc genhtml_legend=1 00:30:58.181 --rc geninfo_all_blocks=1 00:30:58.181 --rc geninfo_unexecuted_blocks=1 00:30:58.181 00:30:58.181 ' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.181 --rc genhtml_branch_coverage=1 00:30:58.181 --rc genhtml_function_coverage=1 00:30:58.181 --rc genhtml_legend=1 00:30:58.181 --rc geninfo_all_blocks=1 00:30:58.181 --rc geninfo_unexecuted_blocks=1 00:30:58.181 00:30:58.181 ' 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:58.181 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.182 16:42:24 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.182 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:58.182 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:58.182 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:58.182 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3051578 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3051578 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3051578 ']' 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:58.439 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.440 
16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.440 [2024-11-04 16:42:25.059904] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:30:58.440 [2024-11-04 16:42:25.059946] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051578 ] 00:30:58.440 [2024-11-04 16:42:25.122369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:58.440 [2024-11-04 16:42:25.166256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.440 [2024-11-04 16:42:25.166259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.440 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.697 16:42:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:58.697 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:58.697 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:58.697 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:58.697 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:58.697 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:58.697 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:58.697 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.697 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.697 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:58.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:58.697 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:58.697 ' 00:31:01.221 [2024-11-04 16:42:27.782739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.591 [2024-11-04 16:42:29.002845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:04.486 [2024-11-04 16:42:31.249818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:31:06.378 [2024-11-04 16:42:33.179705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:08.271 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:08.271 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:08.271 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.271 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.271 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:08.271 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:08.271 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:08.271 16:42:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.529 16:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:08.529 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:08.529 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:08.529 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:08.529 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:08.529 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:08.529 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:08.529 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:08.529 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:08.529 ' 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:13.785 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:13.785 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:13.785 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:13.785 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3051578 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3051578 ']' 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3051578 00:31:13.785 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051578 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051578' 00:31:13.786 killing process with pid 3051578 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3051578 00:31:13.786 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3051578 00:31:14.043 16:42:40 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3051578 ']' 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3051578 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3051578 ']' 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3051578 00:31:14.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3051578) - No such process 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3051578 is not found' 00:31:14.043 Process with pid 3051578 is not found 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:14.043 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:14.044 16:42:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:14.044 00:31:14.044 real 0m15.858s 00:31:14.044 user 0m32.968s 00:31:14.044 sys 0m0.729s 00:31:14.044 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.044 16:42:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:14.044 ************************************ 00:31:14.044 END TEST spdkcli_nvmf_tcp 00:31:14.044 ************************************ 00:31:14.044 16:42:40 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:14.044 16:42:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:14.044 16:42:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:31:14.044 16:42:40 -- common/autotest_common.sh@10 -- # set +x 00:31:14.044 ************************************ 00:31:14.044 START TEST nvmf_identify_passthru 00:31:14.044 ************************************ 00:31:14.044 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:14.044 * Looking for test storage... 00:31:14.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.044 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.044 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.044 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.302 16:42:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.302 --rc genhtml_branch_coverage=1 00:31:14.302 --rc genhtml_function_coverage=1 00:31:14.302 --rc genhtml_legend=1 00:31:14.302 --rc geninfo_all_blocks=1 00:31:14.302 --rc geninfo_unexecuted_blocks=1 00:31:14.302 
00:31:14.302 ' 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.302 --rc genhtml_branch_coverage=1 00:31:14.302 --rc genhtml_function_coverage=1 00:31:14.302 --rc genhtml_legend=1 00:31:14.302 --rc geninfo_all_blocks=1 00:31:14.302 --rc geninfo_unexecuted_blocks=1 00:31:14.302 00:31:14.302 ' 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.302 --rc genhtml_branch_coverage=1 00:31:14.302 --rc genhtml_function_coverage=1 00:31:14.302 --rc genhtml_legend=1 00:31:14.302 --rc geninfo_all_blocks=1 00:31:14.302 --rc geninfo_unexecuted_blocks=1 00:31:14.302 00:31:14.302 ' 00:31:14.302 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.302 --rc genhtml_branch_coverage=1 00:31:14.302 --rc genhtml_function_coverage=1 00:31:14.302 --rc genhtml_legend=1 00:31:14.302 --rc geninfo_all_blocks=1 00:31:14.302 --rc geninfo_unexecuted_blocks=1 00:31:14.302 00:31:14.302 ' 00:31:14.302 16:42:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.302 16:42:40 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.302 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:14.303 16:42:40 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:14.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.303 16:42:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.303 16:42:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:14.303 16:42:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.303 16:42:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.303 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:14.303 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.303 16:42:40 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.303 16:42:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.583 
16:42:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:19.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:19.583 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:19.583 Found net devices under 0000:86:00.0: cvl_0_0 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.583 16:42:46 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:19.583 Found net devices under 0000:86:00.1: cvl_0_1 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.583 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.584 
16:42:46 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.584 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:19.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:31:19.842 00:31:19.842 --- 10.0.0.2 ping statistics --- 00:31:19.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.842 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:19.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:19.842 00:31:19.842 --- 10.0.0.1 ping statistics --- 00:31:19.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.842 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:19.842 16:42:46 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:19.842 
16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:19.842 16:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:19.842 16:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:25.099 16:42:51 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:31:25.099 16:42:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:25.099 16:42:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:25.099 16:42:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:29.280 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:29.280 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:29.280 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.280 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.538 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.538 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3058822 00:31:29.538 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:29.538 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.538 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3058822 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3058822 ']' 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:29.538 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.538 [2024-11-04 16:42:56.167945] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:31:29.538 [2024-11-04 16:42:56.167990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.538 [2024-11-04 16:42:56.235558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.538 [2024-11-04 16:42:56.278365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.538 [2024-11-04 16:42:56.278403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.539 [2024-11-04 16:42:56.278411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.539 [2024-11-04 16:42:56.278417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.539 [2024-11-04 16:42:56.278422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:29.539 [2024-11-04 16:42:56.280051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.539 [2024-11-04 16:42:56.280152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.539 [2024-11-04 16:42:56.280238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.539 [2024-11-04 16:42:56.280239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:29.539 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.539 INFO: Log level set to 20 00:31:29.539 INFO: Requests: 00:31:29.539 { 00:31:29.539 "jsonrpc": "2.0", 00:31:29.539 "method": "nvmf_set_config", 00:31:29.539 "id": 1, 00:31:29.539 "params": { 00:31:29.539 "admin_cmd_passthru": { 00:31:29.539 "identify_ctrlr": true 00:31:29.539 } 00:31:29.539 } 00:31:29.539 } 00:31:29.539 00:31:29.539 INFO: response: 00:31:29.539 { 00:31:29.539 "jsonrpc": "2.0", 00:31:29.539 "id": 1, 00:31:29.539 "result": true 00:31:29.539 } 00:31:29.539 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.539 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.539 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.539 INFO: Setting log level to 20 00:31:29.539 INFO: Setting log level to 20 00:31:29.539 INFO: Log level set to 20 00:31:29.539 INFO: Log level set to 20 00:31:29.539 
INFO: Requests: 00:31:29.539 { 00:31:29.539 "jsonrpc": "2.0", 00:31:29.539 "method": "framework_start_init", 00:31:29.539 "id": 1 00:31:29.539 } 00:31:29.539 00:31:29.539 INFO: Requests: 00:31:29.539 { 00:31:29.539 "jsonrpc": "2.0", 00:31:29.539 "method": "framework_start_init", 00:31:29.539 "id": 1 00:31:29.539 } 00:31:29.539 00:31:29.797 [2024-11-04 16:42:56.391694] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:29.797 INFO: response: 00:31:29.797 { 00:31:29.797 "jsonrpc": "2.0", 00:31:29.797 "id": 1, 00:31:29.797 "result": true 00:31:29.797 } 00:31:29.797 00:31:29.797 INFO: response: 00:31:29.797 { 00:31:29.797 "jsonrpc": "2.0", 00:31:29.797 "id": 1, 00:31:29.797 "result": true 00:31:29.797 } 00:31:29.797 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.797 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.797 INFO: Setting log level to 40 00:31:29.797 INFO: Setting log level to 40 00:31:29.797 INFO: Setting log level to 40 00:31:29.797 [2024-11-04 16:42:56.405025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.797 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.797 16:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:31:29.797 16:42:56 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.797 16:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.070 Nvme0n1 00:31:33.070 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.070 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:33.070 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.070 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.070 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.070 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.071 [2024-11-04 16:42:59.310724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.071 16:42:59 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.071 [ 00:31:33.071 { 00:31:33.071 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:33.071 "subtype": "Discovery", 00:31:33.071 "listen_addresses": [], 00:31:33.071 "allow_any_host": true, 00:31:33.071 "hosts": [] 00:31:33.071 }, 00:31:33.071 { 00:31:33.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.071 "subtype": "NVMe", 00:31:33.071 "listen_addresses": [ 00:31:33.071 { 00:31:33.071 "trtype": "TCP", 00:31:33.071 "adrfam": "IPv4", 00:31:33.071 "traddr": "10.0.0.2", 00:31:33.071 "trsvcid": "4420" 00:31:33.071 } 00:31:33.071 ], 00:31:33.071 "allow_any_host": true, 00:31:33.071 "hosts": [], 00:31:33.071 "serial_number": "SPDK00000000000001", 00:31:33.071 "model_number": "SPDK bdev Controller", 00:31:33.071 "max_namespaces": 1, 00:31:33.071 "min_cntlid": 1, 00:31:33.071 "max_cntlid": 65519, 00:31:33.071 "namespaces": [ 00:31:33.071 { 00:31:33.071 "nsid": 1, 00:31:33.071 "bdev_name": "Nvme0n1", 00:31:33.071 "name": "Nvme0n1", 00:31:33.071 "nguid": "1759946E845E4F70B3C9442CD38C1308", 00:31:33.071 "uuid": "1759946e-845e-4f70-b3c9-442cd38c1308" 00:31:33.071 } 00:31:33.071 ] 00:31:33.071 } 00:31:33.071 ] 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:33.071 16:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:33.071 rmmod nvme_tcp 00:31:33.071 rmmod nvme_fabrics 00:31:33.071 rmmod nvme_keyring 00:31:33.071 16:42:59 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3058822 ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3058822 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3058822 ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3058822 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058822 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058822' 00:31:33.071 killing process with pid 3058822 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3058822 00:31:33.071 16:42:59 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3058822 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:34.965 16:43:01 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-save 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:34.965 16:43:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.965 16:43:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:34.965 16:43:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.499 16:43:03 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.499 00:31:37.499 real 0m22.984s 00:31:37.499 user 0m29.007s 00:31:37.499 sys 0m6.110s 00:31:37.499 16:43:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.499 16:43:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.499 ************************************ 00:31:37.499 END TEST nvmf_identify_passthru 00:31:37.499 ************************************ 00:31:37.499 16:43:03 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:37.499 16:43:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:37.499 16:43:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.499 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:31:37.499 ************************************ 00:31:37.499 START TEST nvmf_dif 00:31:37.499 ************************************ 00:31:37.499 16:43:03 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:37.499 * Looking for test storage... 
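The identify-passthru check above (identify_passthru.sh@54 and @61) pipes spdk_nvme_identify output through grep and awk and compares the TCP-path values against the PCIe-path values. A minimal standalone sketch of the same pipeline, run against a hypothetical sample of the identify output rather than the live tool:

```shell
#!/usr/bin/env bash
# Hypothetical sample of spdk_nvme_identify output; the real test pipes the
# live tool's output through the same grep/awk stages shown in the log.
identify_output='Serial Number: PHLN951000C61P6AGN
Model Number: INTEL SSDPE2KE016T8'

# Same extraction as identify_passthru.sh@54 / @61: grep the field, keep $3.
nvmf_serial_number=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
nvmf_model_number=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')

echo "$nvmf_serial_number"
echo "$nvmf_model_number"
```

Note that `awk '{print $3}'` keeps only the first word of a multi-word model string, which is why the comparison in the log is against `INTEL` rather than a full model number.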
00:31:37.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.499 16:43:03 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:37.499 16:43:03 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:37.499 16:43:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:31:37.499 16:43:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.499 16:43:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.500 --rc genhtml_branch_coverage=1 00:31:37.500 --rc genhtml_function_coverage=1 00:31:37.500 --rc genhtml_legend=1 00:31:37.500 --rc geninfo_all_blocks=1 00:31:37.500 --rc geninfo_unexecuted_blocks=1 00:31:37.500 00:31:37.500 ' 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.500 --rc genhtml_branch_coverage=1 00:31:37.500 --rc genhtml_function_coverage=1 00:31:37.500 --rc genhtml_legend=1 00:31:37.500 --rc geninfo_all_blocks=1 00:31:37.500 --rc geninfo_unexecuted_blocks=1 00:31:37.500 00:31:37.500 ' 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:31:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.500 --rc genhtml_branch_coverage=1 00:31:37.500 --rc genhtml_function_coverage=1 00:31:37.500 --rc genhtml_legend=1 00:31:37.500 --rc geninfo_all_blocks=1 00:31:37.500 --rc geninfo_unexecuted_blocks=1 00:31:37.500 00:31:37.500 ' 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.500 --rc genhtml_branch_coverage=1 00:31:37.500 --rc genhtml_function_coverage=1 00:31:37.500 --rc genhtml_legend=1 00:31:37.500 --rc geninfo_all_blocks=1 00:31:37.500 --rc geninfo_unexecuted_blocks=1 00:31:37.500 00:31:37.500 ' 00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:37.500 16:43:03 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.500 16:43:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.500 16:43:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.500 16:43:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.500 16:43:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.500 16:43:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.500 16:43:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.500 16:43:03 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.500 16:43:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:37.500 16:43:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:37.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:37.500 16:43:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.500 16:43:03 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.500 16:43:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:42.837 16:43:09 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.837 16:43:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:42.838 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:42.838 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.838 16:43:09 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:42.838 Found net devices under 0000:86:00.0: cvl_0_0 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:42.838 Found net devices under 0000:86:00.1: cvl_0_1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.838 
16:43:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:42.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:31:42.838 00:31:42.838 --- 10.0.0.2 ping statistics --- 00:31:42.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.838 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:31:42.838 00:31:42.838 --- 10.0.0.1 ping statistics --- 00:31:42.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.838 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:42.838 16:43:09 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:45.371 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:45.371 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:31:45.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:45.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.371 16:43:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:45.371 16:43:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.371 16:43:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.371 16:43:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3064290 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:45.371 16:43:11 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3064290 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3064290 ']' 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:45.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.372 16:43:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.372 [2024-11-04 16:43:12.036724] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:31:45.372 [2024-11-04 16:43:12.036765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.372 [2024-11-04 16:43:12.100258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.372 [2024-11-04 16:43:12.138507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.372 [2024-11-04 16:43:12.138540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.372 [2024-11-04 16:43:12.138546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.372 [2024-11-04 16:43:12.138553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.372 [2024-11-04 16:43:12.138559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:45.372 [2024-11-04 16:43:12.139135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:45.631 16:43:12 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.631 16:43:12 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.631 16:43:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:45.631 16:43:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.631 [2024-11-04 16:43:12.281893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.631 16:43:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.631 16:43:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:45.631 ************************************ 00:31:45.631 START TEST fio_dif_1_default 00:31:45.631 ************************************ 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:45.631 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:45.632 bdev_null0 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:45.632 [2024-11-04 16:43:12.358220] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.632 { 00:31:45.632 "params": { 00:31:45.632 "name": "Nvme$subsystem", 00:31:45.632 "trtype": "$TEST_TRANSPORT", 00:31:45.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.632 "adrfam": "ipv4", 00:31:45.632 "trsvcid": "$NVMF_PORT", 00:31:45.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.632 "hdgst": ${hdgst:-false}, 00:31:45.632 "ddgst": ${ddgst:-false} 00:31:45.632 }, 00:31:45.632 "method": "bdev_nvme_attach_controller" 00:31:45.632 } 00:31:45.632 EOF 00:31:45.632 )") 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.632 "params": { 00:31:45.632 "name": "Nvme0", 00:31:45.632 "trtype": "tcp", 00:31:45.632 "traddr": "10.0.0.2", 00:31:45.632 "adrfam": "ipv4", 00:31:45.632 "trsvcid": "4420", 00:31:45.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.632 "hdgst": false, 00:31:45.632 "ddgst": false 00:31:45.632 }, 00:31:45.632 "method": "bdev_nvme_attach_controller" 00:31:45.632 }' 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:45.632 16:43:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.197 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:46.197 fio-3.35 
00:31:46.197 Starting 1 thread 00:31:58.533 00:31:58.533 filename0: (groupid=0, jobs=1): err= 0: pid=3064664: Mon Nov 4 16:43:23 2024 00:31:58.533 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:31:58.533 slat (nsec): min=5688, max=32863, avg=6209.60, stdev=1751.68 00:31:58.533 clat (usec): min=40847, max=45481, avg=41034.08, stdev=337.61 00:31:58.533 lat (usec): min=40852, max=45514, avg=41040.29, stdev=338.16 00:31:58.533 clat percentiles (usec): 00:31:58.533 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:58.533 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:58.533 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:58.533 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:31:58.533 | 99.99th=[45351] 00:31:58.533 bw ( KiB/s): min= 384, max= 416, per=99.55%, avg=388.80, stdev=11.72, samples=20 00:31:58.533 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:58.533 lat (msec) : 50=100.00% 00:31:58.533 cpu : usr=92.12%, sys=7.63%, ctx=60, majf=0, minf=0 00:31:58.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.533 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.533 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:58.533 00:31:58.533 Run status group 0 (all jobs): 00:31:58.533 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10017-10017msec 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.533 00:31:58.533 real 0m11.375s 00:31:58.533 user 0m15.702s 00:31:58.533 sys 0m1.112s 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.533 ************************************ 00:31:58.533 END TEST fio_dif_1_default 00:31:58.533 ************************************ 00:31:58.533 16:43:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:58.533 16:43:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.533 16:43:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.533 16:43:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.533 ************************************ 00:31:58.533 START TEST fio_dif_1_multi_subsystems 00:31:58.533 ************************************ 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:58.533 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 bdev_null0 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 [2024-11-04 16:43:23.801440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 bdev_null1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 16:43:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:58.534 { 00:31:58.534 "params": { 00:31:58.534 "name": "Nvme$subsystem", 00:31:58.534 "trtype": "$TEST_TRANSPORT", 00:31:58.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.534 "adrfam": "ipv4", 00:31:58.534 "trsvcid": "$NVMF_PORT", 00:31:58.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.534 "hdgst": ${hdgst:-false}, 00:31:58.534 "ddgst": ${ddgst:-false} 00:31:58.534 }, 00:31:58.534 "method": "bdev_nvme_attach_controller" 00:31:58.534 } 00:31:58.534 EOF 00:31:58.534 )") 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.534 16:43:23 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:58.534 { 00:31:58.534 "params": { 00:31:58.534 "name": "Nvme$subsystem", 00:31:58.534 "trtype": "$TEST_TRANSPORT", 00:31:58.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.534 "adrfam": "ipv4", 00:31:58.534 "trsvcid": "$NVMF_PORT", 00:31:58.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.534 "hdgst": ${hdgst:-false}, 00:31:58.534 "ddgst": ${ddgst:-false} 00:31:58.534 }, 00:31:58.534 "method": "bdev_nvme_attach_controller" 00:31:58.534 } 00:31:58.534 EOF 00:31:58.534 )") 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:58.534 "params": { 00:31:58.534 "name": "Nvme0", 00:31:58.534 "trtype": "tcp", 00:31:58.534 "traddr": "10.0.0.2", 00:31:58.534 "adrfam": "ipv4", 00:31:58.534 "trsvcid": "4420", 00:31:58.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.534 "hdgst": false, 00:31:58.534 "ddgst": false 00:31:58.534 }, 00:31:58.534 "method": "bdev_nvme_attach_controller" 00:31:58.534 },{ 00:31:58.534 "params": { 00:31:58.534 "name": "Nvme1", 00:31:58.534 "trtype": "tcp", 00:31:58.534 "traddr": "10.0.0.2", 00:31:58.534 "adrfam": "ipv4", 00:31:58.534 "trsvcid": "4420", 00:31:58.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:58.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:58.534 "hdgst": false, 00:31:58.534 "ddgst": false 00:31:58.534 }, 00:31:58.534 "method": "bdev_nvme_attach_controller" 00:31:58.534 }' 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:58.534 16:43:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.535 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:58.535 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:58.535 fio-3.35 00:31:58.535 Starting 2 threads 00:32:08.501 00:32:08.501 filename0: (groupid=0, jobs=1): err= 0: pid=3066632: Mon Nov 4 16:43:35 2024 00:32:08.501 read: IOPS=191, BW=764KiB/s (783kB/s)(7664KiB/10027msec) 00:32:08.501 slat (nsec): min=5769, max=39388, avg=6800.73, stdev=1902.99 00:32:08.501 clat (usec): min=385, max=42866, avg=20913.47, stdev=20510.41 00:32:08.501 lat (usec): min=391, max=42905, avg=20920.27, stdev=20509.89 00:32:08.501 clat percentiles (usec): 00:32:08.501 | 1.00th=[ 420], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 445], 00:32:08.501 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 660], 60.00th=[40633], 00:32:08.501 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:32:08.501 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:08.501 | 99.99th=[42730] 00:32:08.501 bw ( KiB/s): min= 702, max= 832, per=50.08%, avg=764.70, stdev=25.47, samples=20 00:32:08.501 iops : min= 175, max= 208, avg=191.15, stdev= 6.43, samples=20 00:32:08.501 lat (usec) : 500=46.19%, 750=3.91% 00:32:08.501 lat (msec) : 50=49.90% 00:32:08.501 cpu : usr=96.75%, sys=3.01%, ctx=15, majf=0, minf=98 00:32:08.501 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:08.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.501 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.501 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:08.501 filename1: (groupid=0, jobs=1): err= 0: pid=3066633: Mon Nov 4 16:43:35 2024 00:32:08.501 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10008msec) 00:32:08.501 slat (nsec): min=5756, max=39727, avg=6869.62, stdev=1951.27 00:32:08.501 clat (usec): min=423, max=42583, avg=20961.42, stdev=20499.22 00:32:08.501 lat (usec): min=428, max=42589, avg=20968.29, stdev=20498.65 00:32:08.501 clat percentiles (usec): 00:32:08.501 | 1.00th=[ 433], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 474], 00:32:08.501 | 30.00th=[ 486], 40.00th=[ 545], 50.00th=[ 1352], 60.00th=[41157], 00:32:08.501 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:08.501 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:08.501 | 99.99th=[42730] 00:32:08.501 bw ( KiB/s): min= 704, max= 768, per=49.89%, avg=761.50, stdev=19.67, samples=20 00:32:08.501 iops : min= 176, max= 192, avg=190.35, stdev= 4.91, samples=20 00:32:08.501 lat (usec) : 500=34.49%, 750=13.73%, 1000=1.47% 00:32:08.501 lat (msec) : 2=0.42%, 50=49.90% 00:32:08.501 cpu : usr=97.18%, sys=2.58%, ctx=13, majf=0, minf=38 00:32:08.501 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.501 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.501 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:08.501 00:32:08.501 Run status group 0 (all jobs): 00:32:08.501 READ: bw=1525KiB/s (1562kB/s), 763KiB/s-764KiB/s (781kB/s-783kB/s), io=14.9MiB (15.7MB), run=10008-10027msec 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.760 00:32:08.760 real 0m11.679s 00:32:08.760 user 0m26.566s 00:32:08.760 sys 0m0.956s 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.760 16:43:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.760 ************************************ 00:32:08.760 END TEST fio_dif_1_multi_subsystems 00:32:08.761 ************************************ 00:32:08.761 16:43:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:08.761 16:43:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:08.761 16:43:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.761 16:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:08.761 ************************************ 00:32:08.761 START TEST fio_dif_rand_params 00:32:08.761 ************************************ 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:08.761 bdev_null0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:08.761 [2024-11-04 16:43:35.552734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:32:08.761 { 00:32:08.761 "params": { 00:32:08.761 "name": "Nvme$subsystem", 00:32:08.761 "trtype": "$TEST_TRANSPORT", 00:32:08.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.761 "adrfam": "ipv4", 00:32:08.761 "trsvcid": "$NVMF_PORT", 00:32:08.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.761 "hdgst": ${hdgst:-false}, 00:32:08.761 "ddgst": ${ddgst:-false} 00:32:08.761 }, 00:32:08.761 "method": "bdev_nvme_attach_controller" 00:32:08.761 } 00:32:08.761 EOF 00:32:08.761 )") 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # grep libasan 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:08.761 16:43:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:08.761 "params": { 00:32:08.761 "name": "Nvme0", 00:32:08.761 "trtype": "tcp", 00:32:08.761 "traddr": "10.0.0.2", 00:32:08.761 "adrfam": "ipv4", 00:32:08.761 "trsvcid": "4420", 00:32:08.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.761 "hdgst": false, 00:32:08.761 "ddgst": false 00:32:08.761 }, 00:32:08.761 "method": "bdev_nvme_attach_controller" 00:32:08.761 }' 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:09.042 16:43:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.304 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:09.304 ... 00:32:09.304 fio-3.35 00:32:09.304 Starting 3 threads 00:32:15.856 00:32:15.856 filename0: (groupid=0, jobs=1): err= 0: pid=3068594: Mon Nov 4 16:43:41 2024 00:32:15.856 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(200MiB/5044msec) 00:32:15.856 slat (nsec): min=5995, max=33006, avg=10474.74, stdev=2192.53 00:32:15.856 clat (usec): min=3501, max=90169, avg=9412.01, stdev=7541.30 00:32:15.856 lat (usec): min=3509, max=90180, avg=9422.49, stdev=7541.28 00:32:15.856 clat percentiles (usec): 00:32:15.856 | 1.00th=[ 3785], 5.00th=[ 4883], 10.00th=[ 5932], 20.00th=[ 6587], 00:32:15.856 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:32:15.856 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:32:15.856 | 99.00th=[49021], 99.50th=[50070], 99.90th=[89654], 99.95th=[89654], 00:32:15.856 | 99.99th=[89654] 00:32:15.856 bw ( KiB/s): min=25600, max=49920, per=34.19%, avg=40934.40, stdev=7414.14, samples=10 00:32:15.856 iops : min= 200, max= 390, avg=319.80, stdev=57.92, samples=10 00:32:15.856 lat (msec) : 4=2.37%, 10=78.83%, 20=16.11%, 50=2.12%, 100=0.56% 00:32:15.856 cpu : usr=93.77%, sys=5.95%, ctx=8, majf=0, minf=2 00:32:15.856 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.856 issued rwts: total=1601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.856 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:15.856 filename0: (groupid=0, jobs=1): err= 0: pid=3068595: Mon Nov 4 16:43:41 2024 00:32:15.857 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(191MiB/5045msec) 
00:32:15.857 slat (nsec): min=6038, max=32292, avg=10598.22, stdev=2213.28 00:32:15.857 clat (usec): min=3202, max=89354, avg=9875.62, stdev=8449.77 00:32:15.857 lat (usec): min=3209, max=89365, avg=9886.21, stdev=8449.66 00:32:15.857 clat percentiles (usec): 00:32:15.857 | 1.00th=[ 5014], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6980], 00:32:15.857 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 8848], 00:32:15.857 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11600], 00:32:15.857 | 99.00th=[49546], 99.50th=[50594], 99.90th=[88605], 99.95th=[89654], 00:32:15.857 | 99.99th=[89654] 00:32:15.857 bw ( KiB/s): min=28928, max=44800, per=32.59%, avg=39014.40, stdev=5051.56, samples=10 00:32:15.857 iops : min= 226, max= 350, avg=304.80, stdev=39.47, samples=10 00:32:15.857 lat (msec) : 4=0.66%, 10=82.70%, 20=13.11%, 50=2.82%, 100=0.72% 00:32:15.857 cpu : usr=94.27%, sys=5.43%, ctx=12, majf=0, minf=9 00:32:15.857 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.857 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.857 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:15.857 filename0: (groupid=0, jobs=1): err= 0: pid=3068596: Mon Nov 4 16:43:41 2024 00:32:15.857 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(199MiB/5004msec) 00:32:15.857 slat (nsec): min=6033, max=25016, avg=10788.29, stdev=2081.54 00:32:15.857 clat (usec): min=3438, max=51195, avg=9415.38, stdev=5955.51 00:32:15.857 lat (usec): min=3448, max=51206, avg=9426.17, stdev=5955.55 00:32:15.857 clat percentiles (usec): 00:32:15.857 | 1.00th=[ 3687], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6652], 00:32:15.857 | 30.00th=[ 7504], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:32:15.857 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11207], 
95.00th=[11994], 00:32:15.857 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50070], 99.95th=[51119], 00:32:15.857 | 99.99th=[51119] 00:32:15.857 bw ( KiB/s): min=28416, max=50432, per=34.00%, avg=40704.00, stdev=5729.42, samples=10 00:32:15.857 iops : min= 222, max= 394, avg=318.00, stdev=44.76, samples=10 00:32:15.857 lat (msec) : 4=1.76%, 10=70.54%, 20=25.63%, 50=1.95%, 100=0.13% 00:32:15.857 cpu : usr=94.12%, sys=5.58%, ctx=10, majf=0, minf=9 00:32:15.857 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.857 issued rwts: total=1592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.857 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:15.857 00:32:15.857 Run status group 0 (all jobs): 00:32:15.857 READ: bw=117MiB/s (123MB/s), 37.8MiB/s-39.8MiB/s (39.6MB/s-41.7MB/s), io=590MiB (619MB), run=5004-5045msec 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 bdev_null0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:15.857 16:43:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 [2024-11-04 16:43:41.842312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 bdev_null1 00:32:15.857 16:43:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 bdev_null2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:15.857 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 
00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.858 { 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme$subsystem", 00:32:15.858 "trtype": "$TEST_TRANSPORT", 00:32:15.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "$NVMF_PORT", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.858 "hdgst": ${hdgst:-false}, 00:32:15.858 "ddgst": ${ddgst:-false} 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 } 00:32:15.858 EOF 00:32:15.858 )") 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.858 { 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme$subsystem", 00:32:15.858 "trtype": "$TEST_TRANSPORT", 00:32:15.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "$NVMF_PORT", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.858 "hdgst": ${hdgst:-false}, 00:32:15.858 "ddgst": ${ddgst:-false} 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 } 00:32:15.858 EOF 00:32:15.858 )") 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:15.858 
16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.858 { 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme$subsystem", 00:32:15.858 "trtype": "$TEST_TRANSPORT", 00:32:15.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "$NVMF_PORT", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.858 "hdgst": ${hdgst:-false}, 00:32:15.858 "ddgst": ${ddgst:-false} 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 } 00:32:15.858 EOF 00:32:15.858 )") 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme0", 00:32:15.858 "trtype": "tcp", 00:32:15.858 "traddr": "10.0.0.2", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "4420", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.858 "hdgst": false, 00:32:15.858 "ddgst": false 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 },{ 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme1", 00:32:15.858 "trtype": "tcp", 00:32:15.858 "traddr": "10.0.0.2", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "4420", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.858 "hdgst": false, 00:32:15.858 "ddgst": false 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 },{ 00:32:15.858 "params": { 00:32:15.858 "name": "Nvme2", 00:32:15.858 "trtype": "tcp", 00:32:15.858 "traddr": "10.0.0.2", 00:32:15.858 "adrfam": "ipv4", 00:32:15.858 "trsvcid": "4420", 00:32:15.858 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:15.858 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:15.858 "hdgst": false, 00:32:15.858 "ddgst": false 00:32:15.858 }, 00:32:15.858 "method": "bdev_nvme_attach_controller" 00:32:15.858 }' 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.858 16:43:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:15.858 16:43:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.858 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:15.858 ... 00:32:15.858 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:15.858 ... 00:32:15.858 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:15.858 ... 
00:32:15.858 fio-3.35 00:32:15.858 Starting 24 threads 00:32:28.054 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069758: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10001msec) 00:32:28.054 slat (nsec): min=6454, max=87645, avg=38712.52, stdev=17983.57 00:32:28.054 clat (usec): min=9782, max=51746, avg=26223.21, stdev=2190.87 00:32:28.054 lat (usec): min=9795, max=51781, avg=26261.92, stdev=2192.71 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.054 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.054 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:32:28.054 | 99.00th=[30016], 99.50th=[30278], 99.90th=[42730], 99.95th=[42730], 00:32:28.054 | 99.99th=[51643] 00:32:28.054 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.79, stdev=117.40, samples=19 00:32:28.054 iops : min= 544, max= 640, avg=601.16, stdev=29.34, samples=19 00:32:28.054 lat (msec) : 10=0.12%, 20=0.45%, 50=99.40%, 100=0.03% 00:32:28.054 cpu : usr=98.73%, sys=0.86%, ctx=16, majf=0, minf=22 00:32:28.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069759: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.5MiB/10003msec) 00:32:28.054 slat (usec): min=5, max=104, avg=45.42, stdev=18.97 00:32:28.054 clat (usec): min=2562, max=57500, avg=26120.30, stdev=2552.80 00:32:28.054 lat (usec): min=2568, max=57515, avg=26165.72, stdev=2555.84 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 
1.00th=[18482], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.054 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.054 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29492], 00:32:28.054 | 99.00th=[30016], 99.50th=[30278], 99.90th=[43254], 99.95th=[43254], 00:32:28.054 | 99.99th=[57410] 00:32:28.054 bw ( KiB/s): min= 2299, max= 2688, per=4.16%, avg=2403.11, stdev=123.42, samples=19 00:32:28.054 iops : min= 574, max= 672, avg=600.68, stdev=30.87, samples=19 00:32:28.054 lat (msec) : 4=0.27%, 20=0.90%, 50=98.81%, 100=0.03% 00:32:28.054 cpu : usr=98.43%, sys=0.99%, ctx=62, majf=0, minf=20 00:32:28.054 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 issued rwts: total=6028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069760: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.6MiB/10011msec) 00:32:28.054 slat (nsec): min=7212, max=68350, avg=13200.28, stdev=8028.54 00:32:28.054 clat (usec): min=13604, max=35537, avg=26442.94, stdev=2083.84 00:32:28.054 lat (usec): min=13612, max=35564, avg=26456.14, stdev=2084.31 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:32:28.054 | 30.00th=[25035], 40.00th=[26346], 50.00th=[26346], 60.00th=[26346], 00:32:28.054 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29230], 95.00th=[30016], 00:32:28.054 | 99.00th=[30278], 99.50th=[30278], 99.90th=[35390], 99.95th=[35390], 00:32:28.054 | 99.99th=[35390] 00:32:28.054 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2406.40, stdev=153.15, samples=20 00:32:28.054 iops : min= 544, max= 672, avg=601.60, 
stdev=38.29, samples=20 00:32:28.054 lat (msec) : 20=0.80%, 50=99.20% 00:32:28.054 cpu : usr=98.37%, sys=1.13%, ctx=64, majf=0, minf=31 00:32:28.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069762: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10005msec) 00:32:28.054 slat (nsec): min=6738, max=82343, avg=31970.98, stdev=17102.08 00:32:28.054 clat (usec): min=14829, max=46540, avg=26296.83, stdev=1985.26 00:32:28.054 lat (usec): min=14842, max=46554, avg=26328.81, stdev=1987.03 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.054 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.054 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[29754], 00:32:28.054 | 99.00th=[30016], 99.50th=[30278], 99.90th=[38536], 99.95th=[38536], 00:32:28.054 | 99.99th=[46400] 00:32:28.054 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.79, stdev=117.71, samples=19 00:32:28.054 iops : min= 544, max= 640, avg=601.16, stdev=29.46, samples=19 00:32:28.054 lat (msec) : 20=0.30%, 50=99.70% 00:32:28.054 cpu : usr=98.75%, sys=0.88%, ctx=11, majf=0, minf=20 00:32:28.054 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.054 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069763: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10014msec) 00:32:28.054 slat (nsec): min=6900, max=68341, avg=27749.13, stdev=12662.93 00:32:28.054 clat (usec): min=14965, max=33109, avg=26348.65, stdev=1946.86 00:32:28.054 lat (usec): min=14985, max=33131, avg=26376.39, stdev=1947.59 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.054 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:32:28.054 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[30016], 00:32:28.054 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[33162], 00:32:28.054 | 99.99th=[33162] 00:32:28.054 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2411.79, stdev=130.41, samples=19 00:32:28.054 iops : min= 544, max= 672, avg=602.89, stdev=32.62, samples=19 00:32:28.054 lat (msec) : 20=0.60%, 50=99.40% 00:32:28.054 cpu : usr=98.47%, sys=1.02%, ctx=85, majf=0, minf=19 00:32:28.054 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.054 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.054 filename0: (groupid=0, jobs=1): err= 0: pid=3069764: Mon Nov 4 16:43:53 2024 00:32:28.054 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.054 slat (usec): min=7, max=108, avg=47.42, stdev=18.06 00:32:28.054 clat (usec): min=11707, max=30449, avg=26092.46, stdev=2017.44 00:32:28.054 lat (usec): min=11729, max=30464, avg=26139.88, stdev=2021.26 00:32:28.054 clat percentiles (usec): 00:32:28.054 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 
20.00th=[24511], 00:32:28.054 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29492], 00:32:28.055 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:32:28.055 | 99.99th=[30540] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.055 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.055 cpu : usr=98.57%, sys=0.95%, ctx=47, majf=0, minf=31 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename0: (groupid=0, jobs=1): err= 0: pid=3069765: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.055 slat (usec): min=6, max=103, avg=37.22, stdev=20.70 00:32:28.055 clat (usec): min=9464, max=30492, avg=26240.04, stdev=2015.06 00:32:28.055 lat (usec): min=9487, max=30522, avg=26277.26, stdev=2019.38 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:32:28.055 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29754], 00:32:28.055 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:32:28.055 | 99.99th=[30540] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.055 lat (msec) : 10=0.03%, 20=0.50%, 50=99.47% 
00:32:28.055 cpu : usr=97.89%, sys=1.24%, ctx=226, majf=0, minf=32 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename0: (groupid=0, jobs=1): err= 0: pid=3069766: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10002msec) 00:32:28.055 slat (usec): min=4, max=102, avg=45.98, stdev=17.70 00:32:28.055 clat (usec): min=13931, max=38526, avg=26178.61, stdev=2006.44 00:32:28.055 lat (usec): min=13941, max=38539, avg=26224.60, stdev=2008.47 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.055 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29492], 00:32:28.055 | 99.00th=[30016], 99.50th=[30278], 99.90th=[38536], 99.95th=[38536], 00:32:28.055 | 99.99th=[38536] 00:32:28.055 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2404.79, stdev=118.01, samples=19 00:32:28.055 iops : min= 542, max= 640, avg=601.16, stdev=29.58, samples=19 00:32:28.055 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.055 cpu : usr=98.35%, sys=1.09%, ctx=87, majf=0, minf=22 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: 
pid=3069767: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=603, BW=2413KiB/s (2471kB/s)(23.6MiB/10008msec) 00:32:28.055 slat (nsec): min=4213, max=87341, avg=34248.94, stdev=17738.22 00:32:28.055 clat (usec): min=9790, max=36665, avg=26184.84, stdev=2150.65 00:32:28.055 lat (usec): min=9804, max=36713, avg=26219.09, stdev=2153.46 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[19268], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.055 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:32:28.055 | 99.00th=[30278], 99.50th=[33424], 99.90th=[36439], 99.95th=[36439], 00:32:28.055 | 99.99th=[36439] 00:32:28.055 bw ( KiB/s): min= 2304, max= 2688, per=4.18%, avg=2414.05, stdev=120.75, samples=19 00:32:28.055 iops : min= 576, max= 672, avg=603.47, stdev=30.18, samples=19 00:32:28.055 lat (msec) : 10=0.08%, 20=0.96%, 50=98.96% 00:32:28.055 cpu : usr=98.84%, sys=0.75%, ctx=31, majf=0, minf=30 00:32:28.055 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: pid=3069768: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10014msec) 00:32:28.055 slat (nsec): min=6914, max=68025, avg=26040.13, stdev=12525.94 00:32:28.055 clat (usec): min=14971, max=30353, avg=26362.48, stdev=1923.29 00:32:28.055 lat (usec): min=14991, max=30373, avg=26388.52, stdev=1924.28 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.055 | 30.00th=[25035], 40.00th=[26084], 
50.00th=[26346], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[30016], 00:32:28.055 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:32:28.055 | 99.99th=[30278] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2411.53, stdev=130.09, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=602.84, stdev=32.56, samples=19 00:32:28.055 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.055 cpu : usr=98.04%, sys=1.22%, ctx=106, majf=0, minf=27 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: pid=3069769: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.055 slat (usec): min=7, max=108, avg=48.49, stdev=17.87 00:32:28.055 clat (usec): min=9426, max=30442, avg=26104.19, stdev=2024.52 00:32:28.055 lat (usec): min=9434, max=30491, avg=26152.68, stdev=2028.96 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:32:28.055 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29492], 00:32:28.055 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:32:28.055 | 99.99th=[30540] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.055 lat (msec) : 10=0.03%, 20=0.50%, 50=99.47% 00:32:28.055 cpu : usr=98.92%, sys=0.69%, ctx=38, majf=0, minf=26 
00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: pid=3069770: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.055 slat (usec): min=7, max=107, avg=48.52, stdev=17.62 00:32:28.055 clat (usec): min=11680, max=30434, avg=26091.21, stdev=2014.88 00:32:28.055 lat (usec): min=11694, max=30451, avg=26139.73, stdev=2019.17 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:32:28.055 | 30.00th=[24773], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29492], 00:32:28.055 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:32:28.055 | 99.99th=[30540] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.055 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.055 cpu : usr=98.90%, sys=0.68%, ctx=68, majf=0, minf=23 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: pid=3069772: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=601, 
BW=2405KiB/s (2463kB/s)(23.5MiB/10004msec) 00:32:28.055 slat (nsec): min=4659, max=82391, avg=31758.86, stdev=17105.77 00:32:28.055 clat (usec): min=14831, max=45115, avg=26291.79, stdev=1964.62 00:32:28.055 lat (usec): min=14844, max=45129, avg=26323.55, stdev=1966.29 00:32:28.055 clat percentiles (usec): 00:32:28.055 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.055 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.055 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[29754], 00:32:28.055 | 99.00th=[30016], 99.50th=[30278], 99.90th=[37487], 99.95th=[37487], 00:32:28.055 | 99.99th=[45351] 00:32:28.055 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2405.05, stdev=132.05, samples=19 00:32:28.055 iops : min= 544, max= 672, avg=601.26, stdev=33.01, samples=19 00:32:28.055 lat (msec) : 20=0.30%, 50=99.70% 00:32:28.055 cpu : usr=98.54%, sys=0.91%, ctx=57, majf=0, minf=28 00:32:28.055 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.055 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.055 filename1: (groupid=0, jobs=1): err= 0: pid=3069773: Mon Nov 4 16:43:53 2024 00:32:28.055 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.055 slat (nsec): min=7458, max=98338, avg=33002.13, stdev=16728.54 00:32:28.056 clat (usec): min=9448, max=30547, avg=26276.48, stdev=2040.14 00:32:28.056 lat (usec): min=9471, max=30575, avg=26309.48, stdev=2041.20 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:32:28.056 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[28181], 
90.00th=[29230], 95.00th=[29754], 00:32:28.056 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:32:28.056 | 99.99th=[30540] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.056 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.056 lat (msec) : 10=0.03%, 20=0.50%, 50=99.47% 00:32:28.056 cpu : usr=98.04%, sys=1.26%, ctx=127, majf=0, minf=25 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename1: (groupid=0, jobs=1): err= 0: pid=3069774: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10001msec) 00:32:28.056 slat (nsec): min=5968, max=99226, avg=44378.20, stdev=14096.45 00:32:28.056 clat (usec): min=17599, max=38295, avg=26219.91, stdev=1836.63 00:32:28.056 lat (usec): min=17608, max=38313, avg=26264.29, stdev=1837.80 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.056 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29754], 00:32:28.056 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:32:28.056 | 99.99th=[38536] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2411.53, stdev=136.91, samples=19 00:32:28.056 iops : min= 544, max= 672, avg=602.84, stdev=34.26, samples=19 00:32:28.056 lat (msec) : 20=0.33%, 50=99.67% 00:32:28.056 cpu : usr=98.65%, sys=0.89%, ctx=95, majf=0, minf=22 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename1: (groupid=0, jobs=1): err= 0: pid=3069775: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10001msec) 00:32:28.056 slat (nsec): min=7405, max=74733, avg=36709.52, stdev=14325.83 00:32:28.056 clat (usec): min=9838, max=42257, avg=26291.20, stdev=2165.20 00:32:28.056 lat (usec): min=9891, max=42295, avg=26327.91, stdev=2165.08 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.056 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:32:28.056 | 99.00th=[30278], 99.50th=[30278], 99.90th=[42206], 99.95th=[42206], 00:32:28.056 | 99.99th=[42206] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.79, stdev=117.40, samples=19 00:32:28.056 iops : min= 544, max= 640, avg=601.16, stdev=29.34, samples=19 00:32:28.056 lat (msec) : 10=0.07%, 20=0.47%, 50=99.47% 00:32:28.056 cpu : usr=98.54%, sys=1.10%, ctx=13, majf=0, minf=25 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename2: (groupid=0, jobs=1): err= 0: pid=3069776: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10002msec) 00:32:28.056 slat (nsec): 
min=7618, max=78599, avg=30952.17, stdev=14942.12 00:32:28.056 clat (usec): min=10453, max=43381, avg=26365.91, stdev=2175.50 00:32:28.056 lat (usec): min=10497, max=43397, avg=26396.86, stdev=2176.19 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.056 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[30016], 00:32:28.056 | 99.00th=[30278], 99.50th=[30278], 99.90th=[43254], 99.95th=[43254], 00:32:28.056 | 99.99th=[43254] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2565, per=4.16%, avg=2404.79, stdev=118.02, samples=19 00:32:28.056 iops : min= 544, max= 641, avg=601.11, stdev=29.52, samples=19 00:32:28.056 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.056 cpu : usr=98.68%, sys=0.92%, ctx=70, majf=0, minf=20 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename2: (groupid=0, jobs=1): err= 0: pid=3069777: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:28.056 slat (nsec): min=7616, max=99155, avg=37622.71, stdev=15987.73 00:32:28.056 clat (usec): min=11139, max=32550, avg=26234.42, stdev=2033.56 00:32:28.056 lat (usec): min=11152, max=32609, avg=26272.05, stdev=2035.58 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22676], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:32:28.056 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[29754], 00:32:28.056 | 99.00th=[30278], 
99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:32:28.056 | 99.99th=[32637] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=140.83, samples=19 00:32:28.056 iops : min= 544, max= 672, avg=604.63, stdev=35.21, samples=19 00:32:28.056 lat (msec) : 20=0.53%, 50=99.47% 00:32:28.056 cpu : usr=97.99%, sys=1.26%, ctx=91, majf=0, minf=22 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename2: (groupid=0, jobs=1): err= 0: pid=3069779: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=600, BW=2403KiB/s (2461kB/s)(23.5MiB/10013msec) 00:32:28.056 slat (nsec): min=3579, max=99099, avg=42761.39, stdev=14813.46 00:32:28.056 clat (usec): min=18343, max=39396, avg=26282.60, stdev=1938.02 00:32:28.056 lat (usec): min=18380, max=39408, avg=26325.36, stdev=1938.73 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.056 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[28181], 90.00th=[28967], 95.00th=[29754], 00:32:28.056 | 99.00th=[30278], 99.50th=[30278], 99.90th=[39584], 99.95th=[39584], 00:32:28.056 | 99.99th=[39584] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2405.05, stdev=117.46, samples=19 00:32:28.056 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:32:28.056 lat (msec) : 20=0.27%, 50=99.73% 00:32:28.056 cpu : usr=98.74%, sys=0.86%, ctx=44, majf=0, minf=22 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename2: (groupid=0, jobs=1): err= 0: pid=3069780: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10005msec) 00:32:28.056 slat (nsec): min=7810, max=68225, avg=29782.34, stdev=12866.30 00:32:28.056 clat (usec): min=14918, max=38812, avg=26359.75, stdev=1979.33 00:32:28.056 lat (usec): min=14942, max=38825, avg=26389.53, stdev=1979.76 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.056 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29230], 95.00th=[29754], 00:32:28.056 | 99.00th=[30278], 99.50th=[30278], 99.90th=[38536], 99.95th=[38536], 00:32:28.056 | 99.99th=[39060] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.79, stdev=117.71, samples=19 00:32:28.056 iops : min= 544, max= 640, avg=601.16, stdev=29.46, samples=19 00:32:28.056 lat (msec) : 20=0.27%, 50=99.73% 00:32:28.056 cpu : usr=98.58%, sys=0.92%, ctx=53, majf=0, minf=25 00:32:28.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.056 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.056 filename2: (groupid=0, jobs=1): err= 0: pid=3069781: Mon Nov 4 16:43:53 2024 00:32:28.056 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.5MiB/10002msec) 00:32:28.056 slat (nsec): min=5412, max=89574, avg=36352.52, stdev=18529.14 00:32:28.056 clat (usec): 
min=9532, max=52813, avg=26265.32, stdev=2217.96 00:32:28.056 lat (usec): min=9570, max=52829, avg=26301.67, stdev=2220.00 00:32:28.056 clat percentiles (usec): 00:32:28.056 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.056 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.056 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:32:28.056 | 99.00th=[30016], 99.50th=[30278], 99.90th=[43779], 99.95th=[43779], 00:32:28.056 | 99.99th=[52691] 00:32:28.056 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=117.65, samples=19 00:32:28.056 iops : min= 544, max= 640, avg=601.05, stdev=29.44, samples=19 00:32:28.056 lat (msec) : 10=0.15%, 20=0.45%, 50=99.37%, 100=0.03% 00:32:28.057 cpu : usr=98.69%, sys=0.91%, ctx=43, majf=0, minf=22 00:32:28.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.057 filename2: (groupid=0, jobs=1): err= 0: pid=3069782: Mon Nov 4 16:43:53 2024 00:32:28.057 read: IOPS=601, BW=2406KiB/s (2463kB/s)(23.5MiB/10003msec) 00:32:28.057 slat (nsec): min=5703, max=74840, avg=36627.80, stdev=14568.03 00:32:28.057 clat (usec): min=9790, max=44037, avg=26294.37, stdev=2196.03 00:32:28.057 lat (usec): min=9838, max=44053, avg=26331.00, stdev=2196.22 00:32:28.057 clat percentiles (usec): 00:32:28.057 | 1.00th=[22938], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:32:28.057 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:32:28.057 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:32:28.057 | 99.00th=[30278], 99.50th=[30278], 99.90th=[43779], 99.95th=[43779], 00:32:28.057 
| 99.99th=[43779] 00:32:28.057 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2404.53, stdev=117.65, samples=19 00:32:28.057 iops : min= 544, max= 640, avg=601.05, stdev=29.44, samples=19 00:32:28.057 lat (msec) : 10=0.08%, 20=0.45%, 50=99.47% 00:32:28.057 cpu : usr=98.78%, sys=0.85%, ctx=21, majf=0, minf=20 00:32:28.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.057 filename2: (groupid=0, jobs=1): err= 0: pid=3069783: Mon Nov 4 16:43:53 2024 00:32:28.057 read: IOPS=602, BW=2411KiB/s (2469kB/s)(23.6MiB/10008msec) 00:32:28.057 slat (usec): min=7, max=239, avg=18.18, stdev=12.08 00:32:28.057 clat (usec): min=9569, max=30658, avg=26391.05, stdev=2029.34 00:32:28.057 lat (usec): min=9579, max=30674, avg=26409.23, stdev=2024.46 00:32:28.057 clat percentiles (usec): 00:32:28.057 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.057 | 30.00th=[25035], 40.00th=[26346], 50.00th=[26346], 60.00th=[26346], 00:32:28.057 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:32:28.057 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:32:28.057 | 99.99th=[30540] 00:32:28.057 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=147.15, samples=19 00:32:28.057 iops : min= 544, max= 672, avg=604.63, stdev=36.79, samples=19 00:32:28.057 lat (msec) : 10=0.10%, 20=0.43%, 50=99.47% 00:32:28.057 cpu : usr=98.69%, sys=0.90%, ctx=32, majf=0, minf=24 00:32:28.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.057 filename2: (groupid=0, jobs=1): err= 0: pid=3069784: Mon Nov 4 16:43:53 2024 00:32:28.057 read: IOPS=602, BW=2411KiB/s (2469kB/s)(23.6MiB/10008msec) 00:32:28.057 slat (usec): min=7, max=250, avg=21.84, stdev=14.78 00:32:28.057 clat (usec): min=9793, max=30701, avg=26374.61, stdev=2013.48 00:32:28.057 lat (usec): min=9916, max=30726, avg=26396.45, stdev=2009.91 00:32:28.057 clat percentiles (usec): 00:32:28.057 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:28.057 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:32:28.057 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:32:28.057 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:32:28.057 | 99.99th=[30802] 00:32:28.057 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2418.53, stdev=153.21, samples=19 00:32:28.057 iops : min= 544, max= 672, avg=604.63, stdev=38.30, samples=19 00:32:28.057 lat (msec) : 10=0.03%, 20=0.50%, 50=99.47% 00:32:28.057 cpu : usr=98.02%, sys=1.27%, ctx=120, majf=0, minf=28 00:32:28.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:28.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.057 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:28.057 00:32:28.057 Run status group 0 (all jobs): 00:32:28.057 READ: bw=56.4MiB/s (59.1MB/s), 2403KiB/s-2413KiB/s (2461kB/s-2471kB/s), io=565MiB (592MB), run=10001-10014msec 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:28.057 16:43:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:28.057 
16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 bdev_null0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.057 [2024-11-04 16:43:53.689901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:28.057 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.058 bdev_null1 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.058 { 00:32:28.058 "params": { 00:32:28.058 "name": "Nvme$subsystem", 00:32:28.058 "trtype": "$TEST_TRANSPORT", 00:32:28.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.058 "adrfam": "ipv4", 00:32:28.058 "trsvcid": "$NVMF_PORT", 00:32:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.058 "hdgst": ${hdgst:-false}, 00:32:28.058 "ddgst": ${ddgst:-false} 00:32:28.058 }, 00:32:28.058 "method": "bdev_nvme_attach_controller" 00:32:28.058 } 00:32:28.058 EOF 00:32:28.058 )") 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.058 { 00:32:28.058 "params": { 00:32:28.058 "name": "Nvme$subsystem", 00:32:28.058 "trtype": "$TEST_TRANSPORT", 00:32:28.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.058 "adrfam": "ipv4", 00:32:28.058 "trsvcid": "$NVMF_PORT", 00:32:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.058 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:32:28.058 "hdgst": ${hdgst:-false}, 00:32:28.058 "ddgst": ${ddgst:-false} 00:32:28.058 }, 00:32:28.058 "method": "bdev_nvme_attach_controller" 00:32:28.058 } 00:32:28.058 EOF 00:32:28.058 )") 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:28.058 "params": { 00:32:28.058 "name": "Nvme0", 00:32:28.058 "trtype": "tcp", 00:32:28.058 "traddr": "10.0.0.2", 00:32:28.058 "adrfam": "ipv4", 00:32:28.058 "trsvcid": "4420", 00:32:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.058 "hdgst": false, 00:32:28.058 "ddgst": false 00:32:28.058 }, 00:32:28.058 "method": "bdev_nvme_attach_controller" 00:32:28.058 },{ 00:32:28.058 "params": { 00:32:28.058 "name": "Nvme1", 00:32:28.058 "trtype": "tcp", 00:32:28.058 "traddr": "10.0.0.2", 00:32:28.058 "adrfam": "ipv4", 00:32:28.058 "trsvcid": "4420", 00:32:28.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:28.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:28.058 "hdgst": false, 00:32:28.058 "ddgst": false 00:32:28.058 }, 00:32:28.058 "method": "bdev_nvme_attach_controller" 00:32:28.058 }' 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:28.058 16:43:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.058 16:43:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.058 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:28.058 ... 00:32:28.058 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:28.058 ... 00:32:28.058 fio-3.35 00:32:28.058 Starting 4 threads 00:32:33.322 00:32:33.322 filename0: (groupid=0, jobs=1): err= 0: pid=3071714: Mon Nov 4 16:43:59 2024 00:32:33.322 read: IOPS=2738, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 00:32:33.322 slat (nsec): min=5858, max=65139, avg=12409.30, stdev=9202.43 00:32:33.322 clat (usec): min=794, max=5585, avg=2882.81, stdev=406.07 00:32:33.322 lat (usec): min=844, max=5591, avg=2895.22, stdev=406.44 00:32:33.322 clat percentiles (usec): 00:32:33.322 | 1.00th=[ 1811], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:32:33.322 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:32:33.322 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3458], 00:32:33.322 | 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[ 4948], 99.95th=[ 5014], 00:32:33.322 | 99.99th=[ 5604] 00:32:33.322 bw ( KiB/s): min=21152, max=22816, per=25.82%, avg=21852.44, stdev=472.12, samples=9 00:32:33.322 iops : min= 2644, max= 2852, avg=2731.56, stdev=59.02, samples=9 00:32:33.322 lat (usec) : 1000=0.05% 00:32:33.322 lat (msec) : 2=1.67%, 4=96.46%, 10=1.82% 00:32:33.322 cpu : usr=96.00%, sys=3.70%, ctx=7, majf=0, minf=9 00:32:33.322 IO depths : 1=0.4%, 2=4.8%, 4=67.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:33.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.322 complete : 0=0.0%, 
4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.322 issued rwts: total=13697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:33.322 filename0: (groupid=0, jobs=1): err= 0: pid=3071715: Mon Nov 4 16:43:59 2024 00:32:33.322 read: IOPS=2609, BW=20.4MiB/s (21.4MB/s)(103MiB/5041msec) 00:32:33.322 slat (nsec): min=5852, max=80694, avg=13498.15, stdev=10257.52 00:32:33.322 clat (usec): min=588, max=41481, avg=3007.97, stdev=687.81 00:32:33.322 lat (usec): min=617, max=41494, avg=3021.46, stdev=687.67 00:32:33.322 clat percentiles (usec): 00:32:33.322 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2835], 00:32:33.322 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:32:33.322 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3589], 00:32:33.322 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 4948], 99.95th=[ 5080], 00:32:33.322 | 99.99th=[41681] 00:32:33.322 bw ( KiB/s): min=20528, max=21536, per=24.86%, avg=21043.20, stdev=310.87, samples=10 00:32:33.322 iops : min= 2566, max= 2692, avg=2630.40, stdev=38.86, samples=10 00:32:33.323 lat (usec) : 750=0.02%, 1000=0.01% 00:32:33.323 lat (msec) : 2=0.81%, 4=96.62%, 10=2.51%, 50=0.02% 00:32:33.323 cpu : usr=97.04%, sys=2.64%, ctx=11, majf=0, minf=9 00:32:33.323 IO depths : 1=0.3%, 2=4.9%, 4=67.7%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:33.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 issued rwts: total=13155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:33.323 filename1: (groupid=0, jobs=1): err= 0: pid=3071716: Mon Nov 4 16:43:59 2024 00:32:33.323 read: IOPS=2693, BW=21.0MiB/s (22.1MB/s)(105MiB/5004msec) 00:32:33.323 slat (nsec): min=5954, max=61179, avg=14722.59, stdev=8192.10 00:32:33.323 clat (usec): 
min=681, max=5240, avg=2928.35, stdev=398.15 00:32:33.323 lat (usec): min=690, max=5246, avg=2943.07, stdev=398.55 00:32:33.323 clat percentiles (usec): 00:32:33.323 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2474], 20.00th=[ 2671], 00:32:33.323 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:32:33.323 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3556], 00:32:33.323 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 5080], 00:32:33.323 | 99.99th=[ 5211] 00:32:33.323 bw ( KiB/s): min=21008, max=22256, per=25.46%, avg=21555.56, stdev=469.81, samples=9 00:32:33.323 iops : min= 2626, max= 2782, avg=2694.44, stdev=58.73, samples=9 00:32:33.323 lat (usec) : 750=0.01%, 1000=0.03% 00:32:33.323 lat (msec) : 2=1.38%, 4=96.51%, 10=2.07% 00:32:33.323 cpu : usr=95.10%, sys=3.60%, ctx=216, majf=0, minf=9 00:32:33.323 IO depths : 1=0.4%, 2=6.1%, 4=62.9%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:33.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 issued rwts: total=13480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:33.323 filename1: (groupid=0, jobs=1): err= 0: pid=3071718: Mon Nov 4 16:43:59 2024 00:32:33.323 read: IOPS=2600, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:32:33.323 slat (nsec): min=5855, max=64217, avg=13616.87, stdev=10287.09 00:32:33.323 clat (usec): min=568, max=6059, avg=3033.64, stdev=398.53 00:32:33.323 lat (usec): min=579, max=6085, avg=3047.25, stdev=398.25 00:32:33.323 clat percentiles (usec): 00:32:33.323 | 1.00th=[ 2114], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2868], 00:32:33.323 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:32:33.323 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3654], 00:32:33.323 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5538], 99.95th=[ 
5669], 00:32:33.323 | 99.99th=[ 5735] 00:32:33.323 bw ( KiB/s): min=20408, max=21296, per=24.62%, avg=20840.00, stdev=287.28, samples=9 00:32:33.323 iops : min= 2551, max= 2662, avg=2605.00, stdev=35.91, samples=9 00:32:33.323 lat (usec) : 750=0.05%, 1000=0.08% 00:32:33.323 lat (msec) : 2=0.68%, 4=95.93%, 10=3.26% 00:32:33.323 cpu : usr=96.60%, sys=3.06%, ctx=9, majf=0, minf=9 00:32:33.323 IO depths : 1=0.5%, 2=3.9%, 4=69.1%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:33.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.323 issued rwts: total=13007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.323 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:33.323 00:32:33.323 Run status group 0 (all jobs): 00:32:33.323 READ: bw=82.7MiB/s (86.7MB/s), 20.3MiB/s-21.4MiB/s (21.3MB/s-22.4MB/s), io=417MiB (437MB), run=5001-5041msec 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:33.323 16:43:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 16:43:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.323 00:32:33.323 real 0m24.499s 00:32:33.323 user 4m51.815s 00:32:33.323 sys 0m4.961s 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.323 16:44:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 ************************************ 00:32:33.323 END TEST fio_dif_rand_params 00:32:33.323 ************************************ 00:32:33.323 16:44:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:33.323 16:44:00 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:33.323 16:44:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.323 16:44:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:33.323 ************************************ 00:32:33.323 START TEST fio_dif_digest 00:32:33.323 ************************************ 00:32:33.323 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.324 bdev_null0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.324 [2024-11-04 16:44:00.124396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:33.324 { 00:32:33.324 "params": { 00:32:33.324 "name": "Nvme$subsystem", 00:32:33.324 "trtype": "$TEST_TRANSPORT", 00:32:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:33.324 "adrfam": "ipv4", 00:32:33.324 "trsvcid": "$NVMF_PORT", 00:32:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:33.324 "hdgst": ${hdgst:-false}, 00:32:33.324 "ddgst": ${ddgst:-false} 00:32:33.324 }, 00:32:33.324 "method": "bdev_nvme_attach_controller" 00:32:33.324 } 00:32:33.324 EOF 00:32:33.324 )") 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:33.324 16:44:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:33.324 "params": { 00:32:33.324 "name": "Nvme0", 00:32:33.324 "trtype": "tcp", 00:32:33.324 "traddr": "10.0.0.2", 00:32:33.324 "adrfam": "ipv4", 00:32:33.324 "trsvcid": "4420", 00:32:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.324 "hdgst": true, 00:32:33.324 "ddgst": true 00:32:33.324 }, 00:32:33.324 "method": "bdev_nvme_attach_controller" 00:32:33.324 }' 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:33.590 16:44:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:33.855 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:33.855 ... 
00:32:33.855 fio-3.35 00:32:33.855 Starting 3 threads 00:32:46.048 00:32:46.048 filename0: (groupid=0, jobs=1): err= 0: pid=3072889: Mon Nov 4 16:44:11 2024 00:32:46.048 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(376MiB/10044msec) 00:32:46.048 slat (nsec): min=6243, max=34840, avg=11815.90, stdev=2351.26 00:32:46.048 clat (usec): min=7018, max=50020, avg=9992.90, stdev=1300.38 00:32:46.048 lat (usec): min=7028, max=50033, avg=10004.72, stdev=1300.30 00:32:46.048 clat percentiles (usec): 00:32:46.048 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:32:46.048 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:32:46.048 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11469], 00:32:46.048 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12911], 99.95th=[48497], 00:32:46.048 | 99.99th=[50070] 00:32:46.049 bw ( KiB/s): min=35840, max=39936, per=35.29%, avg=38464.00, stdev=1165.40, samples=20 00:32:46.049 iops : min= 280, max= 312, avg=300.50, stdev= 9.10, samples=20 00:32:46.049 lat (msec) : 10=52.94%, 20=46.99%, 50=0.03%, 100=0.03% 00:32:46.049 cpu : usr=95.40%, sys=4.27%, ctx=21, majf=0, minf=9 00:32:46.049 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 issued rwts: total=3007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.049 filename0: (groupid=0, jobs=1): err= 0: pid=3072890: Mon Nov 4 16:44:11 2024 00:32:46.049 read: IOPS=272, BW=34.1MiB/s (35.8MB/s)(343MiB/10048msec) 00:32:46.049 slat (nsec): min=6233, max=32555, avg=12124.70, stdev=2405.54 00:32:46.049 clat (usec): min=7926, max=49311, avg=10966.65, stdev=1389.39 00:32:46.049 lat (usec): min=7939, max=49323, avg=10978.77, stdev=1389.42 00:32:46.049 clat percentiles (usec): 00:32:46.049 | 
1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:32:46.049 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:32:46.049 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12780], 00:32:46.049 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14615], 99.95th=[47973], 00:32:46.049 | 99.99th=[49546] 00:32:46.049 bw ( KiB/s): min=33024, max=36864, per=32.17%, avg=35059.20, stdev=1216.37, samples=20 00:32:46.049 iops : min= 258, max= 288, avg=273.90, stdev= 9.50, samples=20 00:32:46.049 lat (msec) : 10=14.26%, 20=85.66%, 50=0.07% 00:32:46.049 cpu : usr=95.14%, sys=4.54%, ctx=27, majf=0, minf=9 00:32:46.049 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 issued rwts: total=2741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.049 filename0: (groupid=0, jobs=1): err= 0: pid=3072891: Mon Nov 4 16:44:11 2024 00:32:46.049 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10049msec) 00:32:46.049 slat (nsec): min=6218, max=31489, avg=11692.79, stdev=2732.65 00:32:46.049 clat (usec): min=7602, max=52132, avg=10701.76, stdev=1401.00 00:32:46.049 lat (usec): min=7614, max=52142, avg=10713.45, stdev=1400.91 00:32:46.049 clat percentiles (usec): 00:32:46.049 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:32:46.049 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:32:46.049 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12387], 00:32:46.049 | 99.00th=[13173], 99.50th=[13829], 99.90th=[15664], 99.95th=[49021], 00:32:46.049 | 99.99th=[52167] 00:32:46.049 bw ( KiB/s): min=33536, max=37632, per=32.96%, avg=35929.60, stdev=1086.99, samples=20 00:32:46.049 iops : min= 262, max= 294, avg=280.70, stdev= 8.49, 
samples=20 00:32:46.049 lat (msec) : 10=22.21%, 20=77.71%, 50=0.04%, 100=0.04% 00:32:46.049 cpu : usr=95.69%, sys=3.98%, ctx=19, majf=0, minf=12 00:32:46.049 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.049 issued rwts: total=2809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.049 00:32:46.049 Run status group 0 (all jobs): 00:32:46.049 READ: bw=106MiB/s (112MB/s), 34.1MiB/s-37.4MiB/s (35.8MB/s-39.2MB/s), io=1070MiB (1122MB), run=10044-10049msec 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.049 00:32:46.049 
real 0m11.241s 00:32:46.049 user 0m35.139s 00:32:46.049 sys 0m1.558s 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.049 16:44:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.049 ************************************ 00:32:46.049 END TEST fio_dif_digest 00:32:46.049 ************************************ 00:32:46.049 16:44:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:46.049 16:44:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.049 rmmod nvme_tcp 00:32:46.049 rmmod nvme_fabrics 00:32:46.049 rmmod nvme_keyring 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3064290 ']' 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3064290 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3064290 ']' 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3064290 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064290 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.049 
16:44:11 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064290' 00:32:46.049 killing process with pid 3064290 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3064290 00:32:46.049 16:44:11 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3064290 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:46.049 16:44:11 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:47.945 Waiting for block devices as requested 00:32:47.946 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:47.946 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:47.946 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:47.946 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:47.946 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:47.946 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:48.203 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:48.203 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:48.203 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:48.203 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:48.461 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:48.461 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:48.461 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:48.719 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:48.719 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:48.719 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:48.719 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:48.976 16:44:15 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.977 16:44:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.977 16:44:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:48.977 16:44:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.874 16:44:17 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.874 00:32:50.874 real 1m13.892s 00:32:50.874 user 7m8.970s 00:32:50.874 sys 0m20.027s 00:32:50.874 16:44:17 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.874 16:44:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:50.874 ************************************ 00:32:50.874 END TEST nvmf_dif 00:32:50.874 ************************************ 00:32:51.132 16:44:17 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:51.132 16:44:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:51.132 16:44:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.132 16:44:17 -- common/autotest_common.sh@10 -- # set +x 00:32:51.132 ************************************ 00:32:51.132 START TEST nvmf_abort_qd_sizes 00:32:51.132 ************************************ 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:51.132 * Looking for test storage... 
00:32:51.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.132 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:51.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.133 --rc genhtml_branch_coverage=1 00:32:51.133 --rc genhtml_function_coverage=1 00:32:51.133 --rc genhtml_legend=1 00:32:51.133 --rc geninfo_all_blocks=1 00:32:51.133 --rc geninfo_unexecuted_blocks=1 00:32:51.133 00:32:51.133 ' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:51.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.133 --rc genhtml_branch_coverage=1 00:32:51.133 --rc genhtml_function_coverage=1 00:32:51.133 --rc genhtml_legend=1 00:32:51.133 --rc 
geninfo_all_blocks=1 00:32:51.133 --rc geninfo_unexecuted_blocks=1 00:32:51.133 00:32:51.133 ' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:51.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.133 --rc genhtml_branch_coverage=1 00:32:51.133 --rc genhtml_function_coverage=1 00:32:51.133 --rc genhtml_legend=1 00:32:51.133 --rc geninfo_all_blocks=1 00:32:51.133 --rc geninfo_unexecuted_blocks=1 00:32:51.133 00:32:51.133 ' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:51.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.133 --rc genhtml_branch_coverage=1 00:32:51.133 --rc genhtml_function_coverage=1 00:32:51.133 --rc genhtml_legend=1 00:32:51.133 --rc geninfo_all_blocks=1 00:32:51.133 --rc geninfo_unexecuted_blocks=1 00:32:51.133 00:32:51.133 ' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.133 16:44:17 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.133 16:44:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:51.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.133 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.392 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.658 16:44:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:56.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:56.658 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:56.658 Found net devices under 0000:86:00.0: cvl_0_0 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:56.658 Found net devices under 0000:86:00.1: cvl_0_1 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.658 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.916 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.916 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:32:56.917 00:32:56.917 --- 10.0.0.2 ping statistics --- 00:32:56.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.917 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:56.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:56.917 00:32:56.917 --- 10.0.0.1 ping statistics --- 00:32:56.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.917 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:56.917 16:44:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.201 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:00.201 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:01.136 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.136 16:44:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3080908 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3080908 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3080908 ']' 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.136 16:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.395 [2024-11-04 16:44:27.994790] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:33:01.395 [2024-11-04 16:44:27.994833] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.395 [2024-11-04 16:44:28.067145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.395 [2024-11-04 16:44:28.113034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.395 [2024-11-04 16:44:28.113076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.395 [2024-11-04 16:44:28.113083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.395 [2024-11-04 16:44:28.113089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.395 [2024-11-04 16:44:28.113094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:01.395 [2024-11-04 16:44:28.114511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.395 [2024-11-04 16:44:28.114538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.395 [2024-11-04 16:44:28.114609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.395 [2024-11-04 16:44:28.114610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.395 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.395 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:01.395 16:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:01.395 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.395 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.654 16:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.654 ************************************ 00:33:01.654 START TEST spdk_target_abort 00:33:01.654 ************************************ 00:33:01.654 16:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:01.654 16:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:01.654 16:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:01.654 16:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.654 16:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.940 spdk_targetn1 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.940 [2024-11-04 16:44:31.143299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.940 [2024-11-04 16:44:31.189378] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:04.940 16:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:08.226 Initializing NVMe Controllers 00:33:08.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:08.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:08.226 Initialization complete. Launching workers. 
00:33:08.226 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16351, failed: 0 00:33:08.226 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1355, failed to submit 14996 00:33:08.226 success 754, unsuccessful 601, failed 0 00:33:08.226 16:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:08.226 16:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:11.512 Initializing NVMe Controllers 00:33:11.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:11.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:11.512 Initialization complete. Launching workers. 00:33:11.512 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8617, failed: 0 00:33:11.512 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7366 00:33:11.512 success 309, unsuccessful 942, failed 0 00:33:11.512 16:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:11.512 16:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:14.802 Initializing NVMe Controllers 00:33:14.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:14.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:14.802 Initialization complete. Launching workers. 
00:33:14.802 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38258, failed: 0 00:33:14.802 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2927, failed to submit 35331 00:33:14.802 success 594, unsuccessful 2333, failed 0 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.802 16:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3080908 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3080908 ']' 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3080908 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080908 00:33:16.266 16:44:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080908' 00:33:16.266 killing process with pid 3080908 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3080908 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3080908 00:33:16.266 00:33:16.266 real 0m14.643s 00:33:16.266 user 0m55.879s 00:33:16.266 sys 0m2.643s 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.266 16:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:16.266 ************************************ 00:33:16.266 END TEST spdk_target_abort 00:33:16.266 ************************************ 00:33:16.266 16:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:16.266 16:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:16.266 16:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.266 16:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:16.266 ************************************ 00:33:16.266 START TEST kernel_target_abort 00:33:16.266 ************************************ 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:16.266 16:44:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:16.266 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:16.267 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:33:16.267 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:16.267 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:16.267 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:16.267 16:44:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:18.806 Waiting for block devices as requested 00:33:18.806 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:18.806 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:18.806 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:18.806 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:18.806 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:18.806 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:19.064 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:19.064 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:19.064 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:19.064 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:19.323 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:19.323 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:19.323 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:19.582 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:19.582 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:19.582 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:19.582 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:19.841 16:44:46 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:19.841 No valid GPT data, bailing 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:19.841 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:19.842 00:33:19.842 Discovery Log Number of Records 2, Generation counter 2 00:33:19.842 =====Discovery Log Entry 0====== 00:33:19.842 trtype: tcp 00:33:19.842 adrfam: ipv4 00:33:19.842 subtype: current discovery subsystem 00:33:19.842 treq: not specified, sq flow control disable supported 00:33:19.842 portid: 1 00:33:19.842 trsvcid: 4420 00:33:19.842 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:19.842 traddr: 10.0.0.1 00:33:19.842 eflags: none 00:33:19.842 sectype: none 00:33:19.842 =====Discovery Log Entry 1====== 00:33:19.842 trtype: tcp 00:33:19.842 adrfam: ipv4 00:33:19.842 subtype: nvme subsystem 00:33:19.842 treq: not specified, sq flow control disable supported 00:33:19.842 portid: 1 00:33:19.842 trsvcid: 4420 00:33:19.842 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:19.842 traddr: 10.0.0.1 00:33:19.842 eflags: none 00:33:19.842 sectype: none 00:33:19.842 16:44:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:19.842 16:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:23.129 Initializing NVMe Controllers 00:33:23.129 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:23.129 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:23.129 Initialization complete. Launching workers. 
00:33:23.129 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93834, failed: 0 00:33:23.129 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93834, failed to submit 0 00:33:23.129 success 0, unsuccessful 93834, failed 0 00:33:23.129 16:44:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:23.129 16:44:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:26.415 Initializing NVMe Controllers 00:33:26.415 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:26.415 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:26.415 Initialization complete. Launching workers. 00:33:26.415 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146625, failed: 0 00:33:26.415 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37030, failed to submit 109595 00:33:26.415 success 0, unsuccessful 37030, failed 0 00:33:26.415 16:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:26.415 16:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.701 Initializing NVMe Controllers 00:33:29.701 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:29.701 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:29.701 Initialization complete. Launching workers. 
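The trace above shows `rabort` (from `target/abort_qd_sizes.sh`) looping over `trtype adrfam traddr trsvcid subnqn` to assemble the transport string passed to the abort example via `-r`, then sweeping queue depths 4, 24, and 64. A standalone sketch of that string assembly (variable names mirror the script, but this is an illustration, not the actual harness):

```shell
# Transport parameters, as set by the locals in abort_qd_sizes.sh above.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
qds=(4 24 64)

target=
for r in trtype adrfam traddr trsvcid subnqn; do
    # Append "name:value" for each component; ${!r} is an indirect
    # expansion of the variable whose name is stored in $r.
    target="${target:+$target }$r:${!r}"
done
echo "$target"

for qd in "${qds[@]}"; do
    # Each iteration would run the abort example at a different queue depth:
    echo "abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

The incremental `target='trtype:tcp'`, `target='trtype:tcp adrfam:IPv4'`, ... lines in the trace are exactly this loop's iterations under xtrace.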
00:33:29.701 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139772, failed: 0 00:33:29.701 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34994, failed to submit 104778 00:33:29.701 success 0, unsuccessful 34994, failed 0 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:29.701 16:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:29.701 16:44:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.604 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:31.604 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:31.862 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:33.236 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:33.495 00:33:33.495 real 0m17.121s 00:33:33.495 user 0m8.583s 00:33:33.495 sys 0m4.328s 00:33:33.495 16:45:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.496 16:45:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:33.496 ************************************ 00:33:33.496 END TEST kernel_target_abort 00:33:33.496 ************************************ 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.496 rmmod nvme_tcp 00:33:33.496 rmmod nvme_fabrics 00:33:33.496 rmmod nvme_keyring 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
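The `clean_kernel_target` steps traced above tear down the kernel nvmet target through configfs: unlink the subsystem from the port, then remove groups innermost-first, then unload the modules. A reconstructed sketch of that sequence; note the trace does not show where its `echo 0` lands, so writing the namespace's `enable` attribute is an assumption here:

```shell
clean_kernel_target() {
    local cfg=$1 nqn=$2
    [[ -e $cfg/subsystems/$nqn ]] || return 0
    # Disable the namespace first, if the attribute file is present
    # (configfs attribute files vanish with their directory).
    [[ -e $cfg/subsystems/$nqn/namespaces/1/enable ]] &&
        echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    # The port->subsystem link is a symlink; remove it, then rmdir the
    # groups innermost-first. The two guarded rmdirs are no-ops on real
    # configfs, where a group's default children (subsystems/, namespaces/)
    # disappear with their parent; they let the sequence also complete on
    # the plain-directory scratch tree used below.
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/subsystems/$nqn/namespaces" 2>/dev/null || true
    rmdir "$cfg/ports/1/subsystems" 2>/dev/null || true
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
}

# Exercise it against a scratch tree mimicking the configfs layout. On a
# live system cfg would be /sys/kernel/config/nvmet (run as root), followed
# by `modprobe -r nvmet_tcp nvmet` as in the trace.
cfg=$(mktemp -d)
nqn=nqn.2016-06.io.spdk:testnqn
mkdir -p "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1/subsystems"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/$nqn"
clean_kernel_target "$cfg" "$nqn"
```

The ordering matters: the port symlink must go before the subsystem directory can be removed, and `modprobe -r nvmet_tcp nvmet` only succeeds once configfs holds no remaining references.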
-- # modprobe -v -r nvme-fabrics 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3080908 ']' 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3080908 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3080908 ']' 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3080908 00:33:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3080908) - No such process 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3080908 is not found' 00:33:33.496 Process with pid 3080908 is not found 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:33.496 16:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:36.026 Waiting for block devices as requested 00:33:36.026 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:36.285 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:36.285 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:36.285 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:36.285 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:36.544 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:36.544 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:36.544 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:36.544 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:36.802 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:36.802 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:36.802 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:37.060 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:37.060 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:37.060 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:37.060 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:37.319 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:37.319 16:45:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.854 16:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.854 00:33:39.854 real 0m48.328s 00:33:39.854 user 1m8.689s 00:33:39.854 sys 0m15.201s 00:33:39.854 16:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.854 16:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:39.854 ************************************ 00:33:39.854 END TEST nvmf_abort_qd_sizes 00:33:39.854 ************************************ 00:33:39.854 16:45:06 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:39.854 16:45:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:39.854 16:45:06 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:33:39.854 16:45:06 -- common/autotest_common.sh@10 -- # set +x 00:33:39.854 ************************************ 00:33:39.854 START TEST keyring_file 00:33:39.854 ************************************ 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:39.854 * Looking for test storage... 00:33:39.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.854 16:45:06 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:39.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.854 --rc genhtml_branch_coverage=1 00:33:39.854 --rc genhtml_function_coverage=1 00:33:39.854 --rc genhtml_legend=1 00:33:39.854 --rc geninfo_all_blocks=1 00:33:39.854 --rc geninfo_unexecuted_blocks=1 00:33:39.854 00:33:39.854 ' 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:39.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.854 --rc genhtml_branch_coverage=1 00:33:39.854 --rc genhtml_function_coverage=1 00:33:39.854 --rc genhtml_legend=1 00:33:39.854 --rc geninfo_all_blocks=1 00:33:39.854 --rc 
geninfo_unexecuted_blocks=1 00:33:39.854 00:33:39.854 ' 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:39.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.854 --rc genhtml_branch_coverage=1 00:33:39.854 --rc genhtml_function_coverage=1 00:33:39.854 --rc genhtml_legend=1 00:33:39.854 --rc geninfo_all_blocks=1 00:33:39.854 --rc geninfo_unexecuted_blocks=1 00:33:39.854 00:33:39.854 ' 00:33:39.854 16:45:06 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:39.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.854 --rc genhtml_branch_coverage=1 00:33:39.854 --rc genhtml_function_coverage=1 00:33:39.854 --rc genhtml_legend=1 00:33:39.854 --rc geninfo_all_blocks=1 00:33:39.854 --rc geninfo_unexecuted_blocks=1 00:33:39.854 00:33:39.854 ' 00:33:39.854 16:45:06 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:39.854 16:45:06 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.854 16:45:06 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.854 16:45:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.854 16:45:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.854 16:45:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.854 16:45:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.854 16:45:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:39.854 16:45:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.854 16:45:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:39.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qpy5vjGc6V 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qpy5vjGc6V 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qpy5vjGc6V 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Qpy5vjGc6V 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P9DDmiJ2gS 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:39.855 16:45:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P9DDmiJ2gS 00:33:39.855 16:45:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P9DDmiJ2gS 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.P9DDmiJ2gS 
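The `prep_key` calls above wrap a raw hex PSK in the NVMe TLS interchange format via a python one-liner (`format_interchange_psk` / `format_key`) and store it in a mode-0600 temp file. A sketch of the same flow; the encoding below (base64 over the key bytes plus a little-endian CRC32, hash indicator `00` for digest 0) follows the published PSK interchange format, but the exact python body is an assumption since the trace only shows that `python -` is invoked:

```shell
key=00112233445566778899aabbccddeeff
digest=00                  # "00" = no hash, matching digest=0 in the trace
path=$(mktemp)             # e.g. /tmp/tmp.XXXXXXXXXX, as in the trace

interchange=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
# Interchange blob: configured PSK followed by its CRC32 (little-endian).
blob = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{sys.argv[2]}:{base64.b64encode(blob).decode()}:")
EOF
)
echo "$interchange" > "$path"
chmod 0600 "$path"         # keys are stored with 0600 perms, as in the trace
echo "$path"
```

The resulting file path is what later gets handed to `keyring_file_add_key`.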
00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=3089807 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:39.855 16:45:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3089807 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3089807 ']' 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.855 16:45:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:39.855 [2024-11-04 16:45:06.493697] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:33:39.855 [2024-11-04 16:45:06.493747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089807 ] 00:33:39.855 [2024-11-04 16:45:06.555282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.855 [2024-11-04 16:45:06.597157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:40.114 16:45:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:40.114 [2024-11-04 16:45:06.810586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.114 null0 00:33:40.114 [2024-11-04 16:45:06.842641] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:40.114 [2024-11-04 16:45:06.843018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.114 16:45:06 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:40.114 [2024-11-04 16:45:06.874714] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:40.114 request: 00:33:40.114 { 00:33:40.114 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.114 "secure_channel": false, 00:33:40.114 "listen_address": { 00:33:40.114 "trtype": "tcp", 00:33:40.114 "traddr": "127.0.0.1", 00:33:40.114 "trsvcid": "4420" 00:33:40.114 }, 00:33:40.114 "method": "nvmf_subsystem_add_listener", 00:33:40.114 "req_id": 1 00:33:40.114 } 00:33:40.114 Got JSON-RPC error response 00:33:40.114 response: 00:33:40.114 { 00:33:40.114 "code": -32602, 00:33:40.114 "message": "Invalid parameters" 00:33:40.114 } 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:40.114 16:45:06 keyring_file -- keyring/file.sh@47 -- # bperfpid=3089820 00:33:40.114 16:45:06 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3089820 /var/tmp/bperf.sock 00:33:40.114 16:45:06 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:40.114 16:45:06 
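The `NOT rpc_cmd nvmf_subsystem_add_listener ...` step above is a negative test: the listener on 127.0.0.1:4420 already exists, so the RPC is expected to fail with the `-32602` (invalid parameters) error body shown in the log. A minimal stand-in for that check, run against the error payload captured above:

```shell
# Error body as it appears in the JSON-RPC response in the trace.
response='{"code": -32602, "message": "Invalid parameters"}'
code=$(jq -r .code <<<"$response")
# The test harness only cares that the call failed as expected (es=1).
[ "$code" = "-32602" ] && echo "listener-already-exists surfaced as expected"
```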
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3089820 ']' 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.114 16:45:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.115 16:45:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:40.115 [2024-11-04 16:45:06.931004] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 00:33:40.115 [2024-11-04 16:45:06.931044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089820 ] 00:33:40.388 [2024-11-04 16:45:06.993712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.388 [2024-11-04 16:45:07.033628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.388 16:45:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.388 16:45:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:40.388 16:45:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:40.388 16:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:40.648 16:45:07 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P9DDmiJ2gS 00:33:40.648 16:45:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P9DDmiJ2gS 00:33:40.907 16:45:07 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:40.907 16:45:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.907 16:45:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Qpy5vjGc6V == \/\t\m\p\/\t\m\p\.\Q\p\y\5\v\j\G\c\6\V ]] 00:33:40.907 16:45:07 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:40.907 16:45:07 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.907 16:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.165 16:45:07 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.P9DDmiJ2gS == \/\t\m\p\/\t\m\p\.\P\9\D\D\m\i\J\2\g\S ]] 00:33:41.165 16:45:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:41.165 16:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:41.165 16:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.165 16:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.165 16:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.165 16:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:33:41.425 16:45:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:41.425 16:45:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:41.425 16:45:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:41.425 16:45:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.425 16:45:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.425 16:45:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:41.425 16:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.683 16:45:08 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:41.683 16:45:08 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.683 16:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.683 [2024-11-04 16:45:08.431431] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:41.683 nvme0n1 00:33:41.942 16:45:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:33:41.942 16:45:08 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:41.942 16:45:08 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:41.942 16:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.201 16:45:08 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:42.202 16:45:08 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.202 Running I/O for 1 seconds... 00:33:43.578 18744.00 IOPS, 73.22 MiB/s 00:33:43.578 Latency(us) 00:33:43.578 [2024-11-04T15:45:10.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.578 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:43.578 nvme0n1 : 1.05 18024.63 70.41 0.00 0.00 6896.01 3120.76 48434.22 00:33:43.578 [2024-11-04T15:45:10.402Z] =================================================================================================================== 00:33:43.578 [2024-11-04T15:45:10.402Z] Total : 18024.63 70.41 0.00 0.00 6896.01 3120.76 48434.22 00:33:43.578 { 00:33:43.578 "results": [ 00:33:43.578 { 00:33:43.578 "job": "nvme0n1", 00:33:43.578 "core_mask": "0x2", 00:33:43.578 "workload": "randrw", 00:33:43.578 "percentage": 50, 00:33:43.578 "status": "finished", 00:33:43.578 "queue_depth": 128, 00:33:43.578 "io_size": 4096, 00:33:43.578 "runtime": 1.047067, 00:33:43.578 "iops": 18024.63452673038, 00:33:43.578 "mibps": 70.40872862004055, 
00:33:43.578 "io_failed": 0, 00:33:43.578 "io_timeout": 0, 00:33:43.578 "avg_latency_us": 6896.012797319426, 00:33:43.578 "min_latency_us": 3120.7619047619046, 00:33:43.578 "max_latency_us": 48434.22476190476 00:33:43.578 } 00:33:43.578 ], 00:33:43.578 "core_count": 1 00:33:43.578 } 00:33:43.578 16:45:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:43.578 16:45:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:43.578 16:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.837 16:45:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:43.837 16:45:10 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:43.837 16:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:43.837 16:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.837 16:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.837 16:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:43.837 16:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.096 16:45:10 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:44.096 16:45:10 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
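The bdevperf run above prints a machine-readable "results" JSON alongside the human-readable latency table. A minimal sketch of pulling throughput numbers back out of that JSON — the field names ("iops", "mibps", "runtime") are taken from the log output above, not from a documented schema:

```python
import json

def summarize(results_json: str) -> dict:
    """Extract the headline throughput figures from a bdevperf-style results JSON."""
    data = json.loads(results_json)
    job = data["results"][0]  # single-job run, as in the log above
    return {
        "job": job["job"],
        "iops": round(job["iops"], 2),
        "mibps": round(job["mibps"], 2),
        "runtime_s": job["runtime"],
    }

# Sample shaped like the results object printed in the log.
sample = json.dumps({
    "results": [{
        "job": "nvme0n1",
        "runtime": 1.047067,
        "iops": 18024.63452673038,
        "mibps": 70.40872862004055,
    }],
    "core_count": 1,
})
print(summarize(sample))
```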
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:44.096 [2024-11-04 16:45:10.844710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:44.096 [2024-11-04 16:45:10.845234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1886d00 (107): Transport endpoint is not connected 00:33:44.096 [2024-11-04 16:45:10.846229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1886d00 (9): Bad file descriptor 00:33:44.096 [2024-11-04 16:45:10.847230] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:44.096 [2024-11-04 16:45:10.847245] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:44.096 [2024-11-04 16:45:10.847257] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:44.096 [2024-11-04 16:45:10.847266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:33:44.096 request: 00:33:44.096 { 00:33:44.096 "name": "nvme0", 00:33:44.096 "trtype": "tcp", 00:33:44.096 "traddr": "127.0.0.1", 00:33:44.096 "adrfam": "ipv4", 00:33:44.096 "trsvcid": "4420", 00:33:44.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.096 "prchk_reftag": false, 00:33:44.096 "prchk_guard": false, 00:33:44.096 "hdgst": false, 00:33:44.096 "ddgst": false, 00:33:44.096 "psk": "key1", 00:33:44.096 "allow_unrecognized_csi": false, 00:33:44.096 "method": "bdev_nvme_attach_controller", 00:33:44.096 "req_id": 1 00:33:44.096 } 00:33:44.096 Got JSON-RPC error response 00:33:44.096 response: 00:33:44.096 { 00:33:44.096 "code": -5, 00:33:44.096 "message": "Input/output error" 00:33:44.096 } 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:44.096 16:45:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:44.096 16:45:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.096 16:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.355 16:45:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:44.355 16:45:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:44.355 16:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:44.355 16:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.355 16:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.355 16:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.355 16:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:44.614 16:45:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:44.614 16:45:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:44.614 16:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:44.614 16:45:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:44.614 16:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:44.872 16:45:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:44.872 16:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.872 16:45:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:45.131 16:45:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:33:45.131 16:45:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Qpy5vjGc6V 00:33:45.131 16:45:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.131 16:45:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.131 16:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.390 [2024-11-04 16:45:11.991215] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qpy5vjGc6V': 0100660 00:33:45.390 [2024-11-04 16:45:11.991241] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:45.390 request: 00:33:45.390 { 00:33:45.390 "name": "key0", 00:33:45.390 "path": "/tmp/tmp.Qpy5vjGc6V", 00:33:45.390 "method": "keyring_file_add_key", 00:33:45.390 "req_id": 1 00:33:45.390 } 00:33:45.390 Got JSON-RPC error response 00:33:45.390 response: 00:33:45.390 { 00:33:45.390 "code": -1, 00:33:45.390 "message": "Operation not permitted" 00:33:45.390 } 00:33:45.390 16:45:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:45.390 16:45:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:45.390 16:45:12 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:45.390 16:45:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:45.390 16:45:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Qpy5vjGc6V 00:33:45.390 16:45:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qpy5vjGc6V 00:33:45.390 16:45:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Qpy5vjGc6V 00:33:45.390 16:45:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.390 16:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.649 16:45:12 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:45.649 16:45:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:45.649 16:45:12 keyring_file -- 
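The chmod 0660 / chmod 0600 sequence above exercises a permission rule: keyring_file_add_key rejects key files that are readable by group or other ("Invalid permissions for key file ... 0100660"). A standalone sketch of enforcing that mode before registering a key; the temp file here is illustrative and `stat -c` assumes GNU coreutils:

```shell
# Key files passed to keyring_file_add_key must be mode 0600; anything wider
# (e.g. 0660, as tested above) is rejected with "Operation not permitted".
keyfile=$(mktemp)
chmod 0660 "$keyfile"                 # deliberately too permissive
mode=$(stat -c '%a' "$keyfile")
if [ "$mode" != "600" ]; then
    chmod 0600 "$keyfile"             # tighten before registering the key
fi
stat -c '%a' "$keyfile"               # prints 600: now acceptable to the keyring
rm -f "$keyfile"
```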
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.649 16:45:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.649 16:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.915 [2024-11-04 16:45:12.572752] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Qpy5vjGc6V': No such file or directory 00:33:45.915 [2024-11-04 16:45:12.572775] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:45.915 [2024-11-04 16:45:12.572790] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:45.915 [2024-11-04 16:45:12.572797] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:45.915 [2024-11-04 16:45:12.572819] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:45.915 [2024-11-04 16:45:12.572825] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:45.915 request: 00:33:45.915 { 00:33:45.915 "name": "nvme0", 00:33:45.915 "trtype": "tcp", 00:33:45.915 "traddr": "127.0.0.1", 00:33:45.915 "adrfam": "ipv4", 00:33:45.915 "trsvcid": "4420", 00:33:45.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:45.915 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:33:45.915 "prchk_reftag": false, 00:33:45.915 "prchk_guard": false, 00:33:45.915 "hdgst": false, 00:33:45.915 "ddgst": false, 00:33:45.915 "psk": "key0", 00:33:45.915 "allow_unrecognized_csi": false, 00:33:45.915 "method": "bdev_nvme_attach_controller", 00:33:45.915 "req_id": 1 00:33:45.915 } 00:33:45.915 Got JSON-RPC error response 00:33:45.915 response: 00:33:45.915 { 00:33:45.915 "code": -19, 00:33:45.915 "message": "No such device" 00:33:45.915 } 00:33:45.915 16:45:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:45.915 16:45:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:45.915 16:45:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:45.915 16:45:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:45.915 16:45:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:45.915 16:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:46.176 16:45:12 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zFocZgjilb 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:46.176 16:45:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:46.176 16:45:12 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:33:46.176 16:45:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:46.176 16:45:12 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:46.176 16:45:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:46.176 16:45:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zFocZgjilb 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zFocZgjilb 00:33:46.176 16:45:12 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.zFocZgjilb 00:33:46.176 16:45:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zFocZgjilb 00:33:46.176 16:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zFocZgjilb 00:33:46.435 16:45:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.435 16:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.435 nvme0n1 00:33:46.435 16:45:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:46.435 16:45:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:46.435 16:45:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.435 16:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.435 16:45:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:46.435 
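The prep_key / format_interchange_psk helper traced above wraps a raw hex key into an "NVMeTLSkey-1:..." string before writing it to the temp file. A sketch of that wrapping under the NVMe/TCP TLS PSK interchange layout `NVMeTLSkey-1:<hash>:<base64(key || CRC32)>:`; the little-endian CRC32 byte order and two-hex-digit hash indicator are assumptions for illustration, not taken from the SPDK source:

```python
import base64
import binascii
import struct

def format_interchange_psk(key_hex: str, digest: int) -> str:
    """Sketch: wrap a raw hex PSK in the NVMe TLS interchange format.
    Assumed layout: base64 of (key bytes || CRC32 of key, little-endian)."""
    key = bytes.fromhex(key_hex)
    crc = struct.pack("<I", binascii.crc32(key))
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# Same key material and digest selector as the prep_key call in the log.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```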
16:45:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:46.694 16:45:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:46.694 16:45:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:46.694 16:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:46.952 16:45:13 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:46.953 16:45:13 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:46.953 16:45:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.953 16:45:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:46.953 16:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.218 16:45:13 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:47.218 16:45:13 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:47.218 16:45:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:47.218 16:45:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.218 16:45:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.218 16:45:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.218 16:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.218 16:45:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:47.218 16:45:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:47.218 16:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:33:47.479 16:45:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:47.479 16:45:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:47.479 16:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.737 16:45:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:47.737 16:45:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zFocZgjilb 00:33:47.737 16:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zFocZgjilb 00:33:47.996 16:45:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P9DDmiJ2gS 00:33:47.996 16:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P9DDmiJ2gS 00:33:47.996 16:45:14 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:47.996 16:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.255 nvme0n1 00:33:48.255 16:45:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:48.255 16:45:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:48.514 16:45:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:48.514 "subsystems": [ 00:33:48.514 { 00:33:48.514 "subsystem": "keyring", 00:33:48.514 
"config": [ 00:33:48.514 { 00:33:48.514 "method": "keyring_file_add_key", 00:33:48.514 "params": { 00:33:48.514 "name": "key0", 00:33:48.514 "path": "/tmp/tmp.zFocZgjilb" 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "keyring_file_add_key", 00:33:48.514 "params": { 00:33:48.514 "name": "key1", 00:33:48.514 "path": "/tmp/tmp.P9DDmiJ2gS" 00:33:48.514 } 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "subsystem": "iobuf", 00:33:48.514 "config": [ 00:33:48.514 { 00:33:48.514 "method": "iobuf_set_options", 00:33:48.514 "params": { 00:33:48.514 "small_pool_count": 8192, 00:33:48.514 "large_pool_count": 1024, 00:33:48.514 "small_bufsize": 8192, 00:33:48.514 "large_bufsize": 135168, 00:33:48.514 "enable_numa": false 00:33:48.514 } 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "subsystem": "sock", 00:33:48.514 "config": [ 00:33:48.514 { 00:33:48.514 "method": "sock_set_default_impl", 00:33:48.514 "params": { 00:33:48.514 "impl_name": "posix" 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "sock_impl_set_options", 00:33:48.514 "params": { 00:33:48.514 "impl_name": "ssl", 00:33:48.514 "recv_buf_size": 4096, 00:33:48.514 "send_buf_size": 4096, 00:33:48.514 "enable_recv_pipe": true, 00:33:48.514 "enable_quickack": false, 00:33:48.514 "enable_placement_id": 0, 00:33:48.514 "enable_zerocopy_send_server": true, 00:33:48.514 "enable_zerocopy_send_client": false, 00:33:48.514 "zerocopy_threshold": 0, 00:33:48.514 "tls_version": 0, 00:33:48.514 "enable_ktls": false 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "sock_impl_set_options", 00:33:48.514 "params": { 00:33:48.514 "impl_name": "posix", 00:33:48.514 "recv_buf_size": 2097152, 00:33:48.514 "send_buf_size": 2097152, 00:33:48.514 "enable_recv_pipe": true, 00:33:48.514 "enable_quickack": false, 00:33:48.514 "enable_placement_id": 0, 00:33:48.514 "enable_zerocopy_send_server": true, 00:33:48.514 
"enable_zerocopy_send_client": false, 00:33:48.514 "zerocopy_threshold": 0, 00:33:48.514 "tls_version": 0, 00:33:48.514 "enable_ktls": false 00:33:48.514 } 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "subsystem": "vmd", 00:33:48.514 "config": [] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "subsystem": "accel", 00:33:48.514 "config": [ 00:33:48.514 { 00:33:48.514 "method": "accel_set_options", 00:33:48.514 "params": { 00:33:48.514 "small_cache_size": 128, 00:33:48.514 "large_cache_size": 16, 00:33:48.514 "task_count": 2048, 00:33:48.514 "sequence_count": 2048, 00:33:48.514 "buf_count": 2048 00:33:48.514 } 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "subsystem": "bdev", 00:33:48.514 "config": [ 00:33:48.514 { 00:33:48.514 "method": "bdev_set_options", 00:33:48.514 "params": { 00:33:48.514 "bdev_io_pool_size": 65535, 00:33:48.514 "bdev_io_cache_size": 256, 00:33:48.514 "bdev_auto_examine": true, 00:33:48.514 "iobuf_small_cache_size": 128, 00:33:48.514 "iobuf_large_cache_size": 16 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_raid_set_options", 00:33:48.514 "params": { 00:33:48.514 "process_window_size_kb": 1024, 00:33:48.514 "process_max_bandwidth_mb_sec": 0 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_iscsi_set_options", 00:33:48.514 "params": { 00:33:48.514 "timeout_sec": 30 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_nvme_set_options", 00:33:48.514 "params": { 00:33:48.514 "action_on_timeout": "none", 00:33:48.514 "timeout_us": 0, 00:33:48.514 "timeout_admin_us": 0, 00:33:48.514 "keep_alive_timeout_ms": 10000, 00:33:48.514 "arbitration_burst": 0, 00:33:48.514 "low_priority_weight": 0, 00:33:48.514 "medium_priority_weight": 0, 00:33:48.514 "high_priority_weight": 0, 00:33:48.514 "nvme_adminq_poll_period_us": 10000, 00:33:48.514 "nvme_ioq_poll_period_us": 0, 00:33:48.514 "io_queue_requests": 512, 00:33:48.514 
"delay_cmd_submit": true, 00:33:48.514 "transport_retry_count": 4, 00:33:48.514 "bdev_retry_count": 3, 00:33:48.514 "transport_ack_timeout": 0, 00:33:48.514 "ctrlr_loss_timeout_sec": 0, 00:33:48.514 "reconnect_delay_sec": 0, 00:33:48.514 "fast_io_fail_timeout_sec": 0, 00:33:48.514 "disable_auto_failback": false, 00:33:48.514 "generate_uuids": false, 00:33:48.514 "transport_tos": 0, 00:33:48.514 "nvme_error_stat": false, 00:33:48.514 "rdma_srq_size": 0, 00:33:48.514 "io_path_stat": false, 00:33:48.514 "allow_accel_sequence": false, 00:33:48.514 "rdma_max_cq_size": 0, 00:33:48.514 "rdma_cm_event_timeout_ms": 0, 00:33:48.514 "dhchap_digests": [ 00:33:48.514 "sha256", 00:33:48.514 "sha384", 00:33:48.514 "sha512" 00:33:48.514 ], 00:33:48.514 "dhchap_dhgroups": [ 00:33:48.514 "null", 00:33:48.514 "ffdhe2048", 00:33:48.514 "ffdhe3072", 00:33:48.514 "ffdhe4096", 00:33:48.514 "ffdhe6144", 00:33:48.514 "ffdhe8192" 00:33:48.514 ] 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_nvme_attach_controller", 00:33:48.514 "params": { 00:33:48.514 "name": "nvme0", 00:33:48.514 "trtype": "TCP", 00:33:48.514 "adrfam": "IPv4", 00:33:48.514 "traddr": "127.0.0.1", 00:33:48.514 "trsvcid": "4420", 00:33:48.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.514 "prchk_reftag": false, 00:33:48.514 "prchk_guard": false, 00:33:48.514 "ctrlr_loss_timeout_sec": 0, 00:33:48.514 "reconnect_delay_sec": 0, 00:33:48.514 "fast_io_fail_timeout_sec": 0, 00:33:48.514 "psk": "key0", 00:33:48.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.514 "hdgst": false, 00:33:48.514 "ddgst": false, 00:33:48.514 "multipath": "multipath" 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_nvme_set_hotplug", 00:33:48.514 "params": { 00:33:48.514 "period_us": 100000, 00:33:48.514 "enable": false 00:33:48.514 } 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 "method": "bdev_wait_for_examine" 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }, 00:33:48.514 { 00:33:48.514 
"subsystem": "nbd", 00:33:48.514 "config": [] 00:33:48.514 } 00:33:48.514 ] 00:33:48.514 }' 00:33:48.515 16:45:15 keyring_file -- keyring/file.sh@115 -- # killprocess 3089820 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3089820 ']' 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3089820 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089820 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089820' 00:33:48.515 killing process with pid 3089820 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@973 -- # kill 3089820 00:33:48.515 Received shutdown signal, test time was about 1.000000 seconds 00:33:48.515 00:33:48.515 Latency(us) 00:33:48.515 [2024-11-04T15:45:15.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.515 [2024-11-04T15:45:15.339Z] =================================================================================================================== 00:33:48.515 [2024-11-04T15:45:15.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.515 16:45:15 keyring_file -- common/autotest_common.sh@978 -- # wait 3089820 00:33:48.773 16:45:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=3091717 00:33:48.773 16:45:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3091717 /var/tmp/bperf.sock 00:33:48.773 16:45:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:48.773 "subsystems": [ 00:33:48.773 { 00:33:48.773 "subsystem": "keyring", 00:33:48.773 "config": [ 00:33:48.773 
{ 00:33:48.773 "method": "keyring_file_add_key", 00:33:48.773 "params": { 00:33:48.773 "name": "key0", 00:33:48.773 "path": "/tmp/tmp.zFocZgjilb" 00:33:48.773 } 00:33:48.773 }, 00:33:48.773 { 00:33:48.773 "method": "keyring_file_add_key", 00:33:48.773 "params": { 00:33:48.773 "name": "key1", 00:33:48.773 "path": "/tmp/tmp.P9DDmiJ2gS" 00:33:48.773 } 00:33:48.773 } 00:33:48.773 ] 00:33:48.773 }, 00:33:48.773 { 00:33:48.773 "subsystem": "iobuf", 00:33:48.773 "config": [ 00:33:48.773 { 00:33:48.773 "method": "iobuf_set_options", 00:33:48.773 "params": { 00:33:48.773 "small_pool_count": 8192, 00:33:48.773 "large_pool_count": 1024, 00:33:48.774 "small_bufsize": 8192, 00:33:48.774 "large_bufsize": 135168, 00:33:48.774 "enable_numa": false 00:33:48.774 } 00:33:48.774 } 00:33:48.774 ] 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "subsystem": "sock", 00:33:48.774 "config": [ 00:33:48.774 { 00:33:48.774 "method": "sock_set_default_impl", 00:33:48.774 "params": { 00:33:48.774 "impl_name": "posix" 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "sock_impl_set_options", 00:33:48.774 "params": { 00:33:48.774 "impl_name": "ssl", 00:33:48.774 "recv_buf_size": 4096, 00:33:48.774 "send_buf_size": 4096, 00:33:48.774 "enable_recv_pipe": true, 00:33:48.774 "enable_quickack": false, 00:33:48.774 "enable_placement_id": 0, 00:33:48.774 "enable_zerocopy_send_server": true, 00:33:48.774 "enable_zerocopy_send_client": false, 00:33:48.774 "zerocopy_threshold": 0, 00:33:48.774 "tls_version": 0, 00:33:48.774 "enable_ktls": false 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "sock_impl_set_options", 00:33:48.774 "params": { 00:33:48.774 "impl_name": "posix", 00:33:48.774 "recv_buf_size": 2097152, 00:33:48.774 "send_buf_size": 2097152, 00:33:48.774 "enable_recv_pipe": true, 00:33:48.774 "enable_quickack": false, 00:33:48.774 "enable_placement_id": 0, 00:33:48.774 "enable_zerocopy_send_server": true, 00:33:48.774 "enable_zerocopy_send_client": false, 
00:33:48.774 "zerocopy_threshold": 0, 00:33:48.774 "tls_version": 0, 00:33:48.774 "enable_ktls": false 00:33:48.774 } 00:33:48.774 } 00:33:48.774 ] 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "subsystem": "vmd", 00:33:48.774 "config": [] 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "subsystem": "accel", 00:33:48.774 "config": [ 00:33:48.774 { 00:33:48.774 "method": "accel_set_options", 00:33:48.774 "params": { 00:33:48.774 "small_cache_size": 128, 00:33:48.774 "large_cache_size": 16, 00:33:48.774 "task_count": 2048, 00:33:48.774 "sequence_count": 2048, 00:33:48.774 "buf_count": 2048 00:33:48.774 } 00:33:48.774 } 00:33:48.774 ] 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "subsystem": "bdev", 00:33:48.774 "config": [ 00:33:48.774 { 00:33:48.774 "method": "bdev_set_options", 00:33:48.774 "params": { 00:33:48.774 "bdev_io_pool_size": 65535, 00:33:48.774 "bdev_io_cache_size": 256, 00:33:48.774 "bdev_auto_examine": true, 00:33:48.774 "iobuf_small_cache_size": 128, 00:33:48.774 "iobuf_large_cache_size": 16 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "bdev_raid_set_options", 00:33:48.774 "params": { 00:33:48.774 "process_window_size_kb": 1024, 00:33:48.774 "process_max_bandwidth_mb_sec": 0 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "bdev_iscsi_set_options", 00:33:48.774 "params": { 00:33:48.774 "timeout_sec": 30 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "bdev_nvme_set_options", 00:33:48.774 "params": { 00:33:48.774 "action_on_timeout": "none", 00:33:48.774 "timeout_us": 0, 00:33:48.774 "timeout_admin_us": 0, 00:33:48.774 "keep_alive_timeout_ms": 10000, 00:33:48.774 "arbitration_burst": 0, 00:33:48.774 "low_priority_weight": 0, 00:33:48.774 "medium_priority_weight": 0, 00:33:48.774 "high_priority_weight": 0, 00:33:48.774 "nvme_adminq_poll_period_us": 10000, 00:33:48.774 "nvme_ioq_poll_period_us": 0, 00:33:48.774 "io_queue_requests": 512, 00:33:48.774 "delay_cmd_submit": true, 00:33:48.774 
"transport_retry_count": 4, 00:33:48.774 "bdev_retry_count": 3, 00:33:48.774 "transport_ack_timeout": 0, 00:33:48.774 "ctrlr_loss_timeout_sec": 0, 00:33:48.774 "reconnect_delay_sec": 0, 00:33:48.774 "fast_io_fail_timeout_sec": 0, 00:33:48.774 "disable_auto_failback": false, 00:33:48.774 "generate_uuids": false, 00:33:48.774 "transport_tos": 0, 00:33:48.774 "nvme_error_stat": false, 00:33:48.774 "rdma_srq_size": 0, 00:33:48.774 "io_path_stat": false, 00:33:48.774 "allow_accel_sequence": false, 00:33:48.774 "rdma_max_cq_size": 0, 00:33:48.774 "rdma_cm_event_timeout_ms": 0, 00:33:48.774 "dhchap_digests": [ 00:33:48.774 "sha256", 00:33:48.774 "sha384", 00:33:48.774 "sha512" 00:33:48.774 ], 00:33:48.774 "dhchap_dhgroups": [ 00:33:48.774 "null", 00:33:48.774 "ffdhe2048", 00:33:48.774 "ffdhe3072", 00:33:48.774 "ffdhe4096", 00:33:48.774 "ffdhe6144", 00:33:48.774 "ffdhe8192" 00:33:48.774 ] 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "bdev_nvme_attach_controller", 00:33:48.774 "params": { 00:33:48.774 "name": "nvme0", 00:33:48.774 "trtype": "TCP", 00:33:48.774 "adrfam": "IPv4", 00:33:48.774 "traddr": "127.0.0.1", 00:33:48.774 "trsvcid": "4420", 00:33:48.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.774 "prchk_reftag": false, 00:33:48.774 "prchk_guard": false, 00:33:48.774 "ctrlr_loss_timeout_sec": 0, 00:33:48.774 "reconnect_delay_sec": 0, 00:33:48.774 "fast_io_fail_timeout_sec": 0, 00:33:48.774 "psk": "key0", 00:33:48.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.774 "hdgst": false, 00:33:48.774 "ddgst": false, 00:33:48.774 "multipath": "multipath" 00:33:48.774 } 00:33:48.774 }, 00:33:48.774 { 00:33:48.774 "method": "bdev_nvme_set_hotplug", 00:33:48.774 "params": { 00:33:48.774 "period_us": 100000, 00:33:48.774 "enable": false 00:33:48.774 } 00:33:48.774 }, 00:33:48.775 { 00:33:48.775 "method": "bdev_wait_for_examine" 00:33:48.775 } 00:33:48.775 ] 00:33:48.775 }, 00:33:48.775 { 00:33:48.775 "subsystem": "nbd", 00:33:48.775 "config": [] 
00:33:48.775 } 00:33:48.775 ] 00:33:48.775 }' 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3091717 ']' 00:33:48.775 16:45:15 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:48.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.775 16:45:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:48.775 [2024-11-04 16:45:15.508792] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
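The bperf run above hands its JSON subsystem config to bdevperf as `-c /dev/fd/63`, i.e. through shell process substitution rather than a temp file. A minimal sketch of the same pattern follows; the key name and path are copied from the log, and the bdevperf invocation is shown only as a comment (it is not run here):

```shell
# Sketch: build an inline SPDK subsystem config and feed it to a consumer
# through a file descriptor, as the test does with `-c /dev/fd/63`.
config='{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.zFocZgjilb" }
        }
      ]
    }
  ]
}'
# Real invocation (illustrative only):
#   bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
#            -r /var/tmp/bperf.sock -z -c <(echo "$config")
# Check that the inline document is well-formed JSON before handing it over:
echo "$config" | python3 -m json.tool > /dev/null && echo OK
```

The process-substitution form keeps the config out of the filesystem entirely, which matters here because the config embeds the paths of freshly created key files.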
00:33:48.775 [2024-11-04 16:45:15.508838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091717 ] 00:33:48.775 [2024-11-04 16:45:15.569146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.033 [2024-11-04 16:45:15.612467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.033 [2024-11-04 16:45:15.771663] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:49.599 16:45:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.599 16:45:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:49.599 16:45:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:49.599 16:45:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:49.599 16:45:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:49.858 16:45:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:49.858 16:45:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:49.858 16:45:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:49.858 16:45:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:49.858 16:45:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:49.858 16:45:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:49.858 16:45:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.116 16:45:16 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:50.116 16:45:16 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:50.116 16:45:16 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:50.116 16:45:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:50.116 16:45:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.116 16:45:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.116 16:45:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:50.116 16:45:16 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:50.116 16:45:16 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:50.116 16:45:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:50.116 16:45:16 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:50.375 16:45:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:50.375 16:45:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:50.375 16:45:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.zFocZgjilb /tmp/tmp.P9DDmiJ2gS 00:33:50.375 16:45:17 keyring_file -- keyring/file.sh@20 -- # killprocess 3091717 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3091717 ']' 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3091717 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091717 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3091717' 00:33:50.375 killing process with pid 3091717 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@973 -- # kill 3091717 00:33:50.375 Received shutdown signal, test time was about 1.000000 seconds 00:33:50.375 00:33:50.375 Latency(us) 00:33:50.375 [2024-11-04T15:45:17.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.375 [2024-11-04T15:45:17.199Z] =================================================================================================================== 00:33:50.375 [2024-11-04T15:45:17.199Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:50.375 16:45:17 keyring_file -- common/autotest_common.sh@978 -- # wait 3091717 00:33:50.634 16:45:17 keyring_file -- keyring/file.sh@21 -- # killprocess 3089807 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3089807 ']' 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3089807 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089807 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089807' 00:33:50.634 killing process with pid 3089807 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@973 -- # kill 3089807 00:33:50.634 16:45:17 keyring_file -- common/autotest_common.sh@978 -- # wait 3089807 00:33:50.892 00:33:50.892 real 0m11.502s 00:33:50.892 user 0m28.506s 00:33:50.892 sys 0m2.642s 00:33:50.892 16:45:17 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
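The refcnt and key-count checks above parse `keyring_get_keys` output with jq (`jq length` for the number of loaded keys, and `jq '.[] | select(.name == "key0")' | jq -r .refcnt` for a single key's reference count). The same filters are mirrored below with python3 so the sketch is self-contained; the sample response is invented for illustration, with field names taken from the log:

```shell
# Invented sample of a keyring_get_keys response (names/paths from the log,
# refcnt values illustrative only).
keys='[{"name":"key0","path":"/tmp/tmp.zFocZgjilb","refcnt":2},
       {"name":"key1","path":"/tmp/tmp.P9DDmiJ2gS","refcnt":1}]'
# jq length
len=$(echo "$keys" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')
# jq '.[] | select(.name == "key0")' | jq -r .refcnt
ref=$(echo "$keys" | python3 -c 'import json,sys; print([k["refcnt"] for k in json.load(sys.stdin) if k["name"] == "key0"][0])')
echo "keys=$len key0.refcnt=$ref"
```

A refcnt of 2 for key0 matches the test's expectation while the attached controller still holds the PSK; key1, referenced only by the keyring itself, stays at 1.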
00:33:50.892 16:45:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:50.892 ************************************ 00:33:50.892 END TEST keyring_file 00:33:50.892 ************************************ 00:33:50.892 16:45:17 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:33:50.892 16:45:17 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:50.892 16:45:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:50.892 16:45:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.892 16:45:17 -- common/autotest_common.sh@10 -- # set +x 00:33:50.892 ************************************ 00:33:50.892 START TEST keyring_linux 00:33:50.892 ************************************ 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:51.151 Joined session keyring: 836552268 00:33:51.151 * Looking for test storage... 
00:33:51.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.151 16:45:17 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.151 --rc genhtml_branch_coverage=1 00:33:51.151 --rc genhtml_function_coverage=1 00:33:51.151 --rc genhtml_legend=1 00:33:51.151 --rc geninfo_all_blocks=1 00:33:51.151 --rc geninfo_unexecuted_blocks=1 00:33:51.151 00:33:51.151 ' 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.151 --rc genhtml_branch_coverage=1 00:33:51.151 --rc genhtml_function_coverage=1 00:33:51.151 --rc genhtml_legend=1 00:33:51.151 --rc geninfo_all_blocks=1 00:33:51.151 --rc geninfo_unexecuted_blocks=1 00:33:51.151 00:33:51.151 ' 
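The `cmp_versions` trace above splits `1.15` and `2` on `.`/`-` into field arrays and walks them index by index, treating a missing field as 0. A condensed sketch of the same idea (simplified to numeric dotted versions; not the full scripts/common.sh implementation):

```shell
# Simplified field-by-field "less than" version compare in the spirit of
# the cmp_versions trace above. Missing fields default to 0, so 1.15 vs 2
# compares as (1,15) vs (2,0).
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is the check the harness uses to decide whether the installed lcov predates the option-name change, which in turn selects the `--rc lcov_*`-style flags exported just above.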
00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.151 --rc genhtml_branch_coverage=1 00:33:51.151 --rc genhtml_function_coverage=1 00:33:51.151 --rc genhtml_legend=1 00:33:51.151 --rc geninfo_all_blocks=1 00:33:51.151 --rc geninfo_unexecuted_blocks=1 00:33:51.151 00:33:51.151 ' 00:33:51.151 16:45:17 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.152 --rc genhtml_branch_coverage=1 00:33:51.152 --rc genhtml_function_coverage=1 00:33:51.152 --rc genhtml_legend=1 00:33:51.152 --rc geninfo_all_blocks=1 00:33:51.152 --rc geninfo_unexecuted_blocks=1 00:33:51.152 00:33:51.152 ' 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.152 16:45:17 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.152 16:45:17 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.152 16:45:17 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.152 16:45:17 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.152 16:45:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.152 16:45:17 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.152 16:45:17 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.152 16:45:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:51.152 16:45:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:51.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:51.152 /tmp/:spdk-test:key0 00:33:51.152 16:45:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:51.152 16:45:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:51.152 16:45:17 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:51.411 16:45:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:51.411 16:45:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:51.411 /tmp/:spdk-test:key1 00:33:51.411 16:45:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3092269 00:33:51.411 16:45:18 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3092269 00:33:51.411 16:45:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3092269 ']' 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.411 16:45:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:51.411 [2024-11-04 16:45:18.067743] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
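The `prep_key` steps above pipe each hex key through `format_interchange_psk` to produce the `NVMeTLSkey-1:00:...:` strings later handed to `keyctl`. A sketch of that encoding follows, under the assumption that the trailing four bytes are a little-endian zlib CRC32 of the key text; the exact checksum handling lives in nvmf/common.sh's `format_key`, whose python body the log elides, so treat this as illustrative rather than authoritative:

```shell
# Assumed sketch of the PSK interchange encoding seen in the log:
#   NVMeTLSkey-1:<digest>:base64(key-bytes || CRC32):
# The CRC32 variant and byte order are assumptions for illustration.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PYEOF
)
echo "$psk"
```

The 32-character key plus 4 checksum bytes makes a 36-byte payload, which base64-encodes to the 48-character middle field visible in the `keyctl add user` lines below.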
00:33:51.411 [2024-11-04 16:45:18.067792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092269 ] 00:33:51.411 [2024-11-04 16:45:18.129495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.411 [2024-11-04 16:45:18.171155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:51.670 16:45:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:51.670 [2024-11-04 16:45:18.384439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.670 null0 00:33:51.670 [2024-11-04 16:45:18.416499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:51.670 [2024-11-04 16:45:18.416871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.670 16:45:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:51.670 796192946 00:33:51.670 16:45:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:51.670 248217579 00:33:51.670 16:45:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3092281 00:33:51.670 16:45:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3092281 /var/tmp/bperf.sock 00:33:51.670 16:45:18 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3092281 ']' 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:51.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.670 16:45:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:51.670 [2024-11-04 16:45:18.487083] Starting SPDK v25.01-pre git sha1 018f47196 / DPDK 24.03.0 initialization... 
00:33:51.670 [2024-11-04 16:45:18.487123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092281 ] 00:33:51.928 [2024-11-04 16:45:18.549010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.928 [2024-11-04 16:45:18.589249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.928 16:45:18 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.928 16:45:18 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:51.928 16:45:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:51.928 16:45:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:52.187 16:45:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:52.187 16:45:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:52.445 16:45:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:52.445 16:45:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:52.445 [2024-11-04 16:45:19.253754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:52.704 nvme0n1 00:33:52.704 16:45:19 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:52.704 16:45:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:52.704 16:45:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:52.704 16:45:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:52.704 16:45:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:52.704 16:45:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.962 16:45:19 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:52.962 16:45:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:52.962 16:45:19 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:52.963 16:45:19 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.963 16:45:19 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:52.963 16:45:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@25 -- # sn=796192946 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 796192946 == \7\9\6\1\9\2\9\4\6 ]] 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 796192946 00:33:52.963 16:45:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:52.963 16:45:19 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:53.221 Running I/O for 1 seconds... 00:33:54.156 20168.00 IOPS, 78.78 MiB/s 00:33:54.156 Latency(us) 00:33:54.156 [2024-11-04T15:45:20.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.156 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:54.156 nvme0n1 : 1.01 20165.15 78.77 0.00 0.00 6324.90 5055.63 12358.22 00:33:54.156 [2024-11-04T15:45:20.980Z] =================================================================================================================== 00:33:54.156 [2024-11-04T15:45:20.980Z] Total : 20165.15 78.77 0.00 0.00 6324.90 5055.63 12358.22 00:33:54.156 { 00:33:54.156 "results": [ 00:33:54.156 { 00:33:54.156 "job": "nvme0n1", 00:33:54.156 "core_mask": "0x2", 00:33:54.156 "workload": "randread", 00:33:54.156 "status": "finished", 00:33:54.156 "queue_depth": 128, 00:33:54.156 "io_size": 4096, 00:33:54.156 "runtime": 1.006489, 00:33:54.156 "iops": 20165.148352341654, 00:33:54.156 "mibps": 78.77011075133458, 00:33:54.156 "io_failed": 0, 00:33:54.156 "io_timeout": 0, 00:33:54.156 "avg_latency_us": 6324.896864312931, 00:33:54.156 "min_latency_us": 5055.634285714285, 00:33:54.156 "max_latency_us": 12358.217142857144 00:33:54.156 } 00:33:54.156 ], 00:33:54.156 "core_count": 1 00:33:54.156 } 00:33:54.156 16:45:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:54.156 16:45:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:54.415 16:45:21 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:54.415 16:45:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:54.415 16:45:21 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:54.415 16:45:21 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:54.415 16:45:21 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:54.415 16:45:21 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:54.674 16:45:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:54.674 [2024-11-04 16:45:21.415432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:54.674 [2024-11-04 16:45:21.415722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cba70 (107): Transport endpoint is not connected 00:33:54.674 [2024-11-04 16:45:21.416717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cba70 (9): Bad file descriptor 00:33:54.674 [2024-11-04 16:45:21.417718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:54.674 [2024-11-04 16:45:21.417729] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:54.674 [2024-11-04 16:45:21.417735] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:54.674 [2024-11-04 16:45:21.417744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:54.674 request: 00:33:54.674 { 00:33:54.674 "name": "nvme0", 00:33:54.674 "trtype": "tcp", 00:33:54.674 "traddr": "127.0.0.1", 00:33:54.674 "adrfam": "ipv4", 00:33:54.674 "trsvcid": "4420", 00:33:54.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.674 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.674 "prchk_reftag": false, 00:33:54.674 "prchk_guard": false, 00:33:54.674 "hdgst": false, 00:33:54.674 "ddgst": false, 00:33:54.674 "psk": ":spdk-test:key1", 00:33:54.674 "allow_unrecognized_csi": false, 00:33:54.674 "method": "bdev_nvme_attach_controller", 00:33:54.674 "req_id": 1 00:33:54.674 } 00:33:54.674 Got JSON-RPC error response 00:33:54.674 response: 00:33:54.674 { 00:33:54.674 "code": -5, 00:33:54.674 "message": "Input/output error" 00:33:54.674 } 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:54.674 16:45:21 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@33 -- # sn=796192946 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 796192946 00:33:54.674 1 links removed 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:54.674 
16:45:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:54.674 16:45:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:54.675 16:45:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:54.675 16:45:21 keyring_linux -- keyring/linux.sh@33 -- # sn=248217579 00:33:54.675 16:45:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 248217579 00:33:54.675 1 links removed 00:33:54.675 16:45:21 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3092281 00:33:54.675 16:45:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3092281 ']' 00:33:54.675 16:45:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3092281 00:33:54.675 16:45:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:54.675 16:45:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.675 16:45:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3092281 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3092281' 00:33:54.934 killing process with pid 3092281 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 3092281 00:33:54.934 Received shutdown signal, test time was about 1.000000 seconds 00:33:54.934 00:33:54.934 Latency(us) 00:33:54.934 [2024-11-04T15:45:21.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.934 [2024-11-04T15:45:21.758Z] =================================================================================================================== 00:33:54.934 [2024-11-04T15:45:21.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 3092281 
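The key strings exercised above (`NVMeTLSkey-1:00:...`) use the NVMe/TCP TLS PSK interchange format: a `NVMeTLSkey-1` prefix, a two-digit PSK hash identifier (`00` = none), and a base64 payload. As a minimal sketch of how the configured key can be recovered from the interchange string that `keyctl print 796192946` echoed above — assuming, per the interchange format, that the trailing four payload bytes are a CRC32 of the key (little-endian assumed here):

```python
import base64
import struct
import zlib

# PSK interchange string exactly as it appears in the keyctl output above.
psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

prefix, hash_id, payload, _ = psk.split(":")
assert prefix == "NVMeTLSkey-1"   # format identifier
assert hash_id == "00"            # 00 = no PSK hash configured

raw = base64.b64decode(payload)   # configured key followed by a 4-byte CRC32 trailer
key, crc_trailer = raw[:-4], raw[-4:]

print("key bytes:", key.decode())
# Assumption: trailer is CRC32 of the key bytes, little-endian.
print("crc ok:", struct.unpack("<I", crc_trailer)[0] == zlib.crc32(key))
```

This is an illustrative decode only; the test itself never unwraps the key in userspace — it passes the keyring name (`:spdk-test:key0`) to `bdev_nvme_attach_controller --psk` and lets the target validate it.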
00:33:54.934 16:45:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3092269 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3092269 ']' 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3092269 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3092269 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3092269' 00:33:54.934 killing process with pid 3092269 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 3092269 00:33:54.934 16:45:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 3092269 00:33:55.193 00:33:55.193 real 0m4.288s 00:33:55.193 user 0m8.000s 00:33:55.193 sys 0m1.444s 00:33:55.193 16:45:22 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.193 16:45:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:55.193 ************************************ 00:33:55.193 END TEST keyring_linux 00:33:55.193 ************************************ 00:33:55.451 16:45:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:55.451 16:45:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:55.451 16:45:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:55.451 16:45:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:55.451 16:45:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:55.451 16:45:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:55.451 16:45:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:55.451 16:45:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.451 16:45:22 -- common/autotest_common.sh@10 -- # set +x 00:33:55.451 16:45:22 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:55.451 16:45:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:55.451 16:45:22 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:55.451 16:45:22 -- common/autotest_common.sh@10 -- # set +x 00:34:00.720 INFO: APP EXITING 00:34:00.720 INFO: killing all VMs 00:34:00.720 INFO: killing vhost app 00:34:00.720 INFO: EXIT DONE 00:34:02.098 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:34:02.098 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:02.099 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:02.437 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:02.437 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:05.000 Cleaning 00:34:05.000 Removing: /var/run/dpdk/spdk0/config 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:05.000 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:05.000 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:05.000 Removing: /var/run/dpdk/spdk1/config 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:05.000 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:05.000 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:05.000 Removing: /var/run/dpdk/spdk2/config 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:05.000 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:05.000 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:05.000 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:05.000 Removing: /var/run/dpdk/spdk3/config 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:05.000 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:05.000 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:05.000 Removing: /var/run/dpdk/spdk4/config 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:05.000 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:05.000 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:34:05.000 Removing: /dev/shm/bdev_svc_trace.1 00:34:05.000 Removing: /dev/shm/nvmf_trace.0 00:34:05.000 Removing: /dev/shm/spdk_tgt_trace.pid2618339 00:34:05.000 Removing: /var/run/dpdk/spdk0 00:34:05.000 Removing: /var/run/dpdk/spdk1 00:34:05.000 Removing: /var/run/dpdk/spdk2 00:34:05.000 Removing: /var/run/dpdk/spdk3 00:34:05.000 Removing: /var/run/dpdk/spdk4 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2615971 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2617035 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2618339 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2618876 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2619842 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2619943 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2620920 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2621076 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2621286 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2623019 00:34:05.000 Removing: /var/run/dpdk/spdk_pid2624517 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2624829 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2625185 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2625475 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2625604 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2625861 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2626108 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2626394 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2627519 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2630538 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2630738 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2630882 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2631055 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2631357 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2631560 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2631836 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2631953 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2632276 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2632321 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2632579 00:34:05.258 Removing: 
/var/run/dpdk/spdk_pid2632595 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2633156 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2633368 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2633700 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2637401 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2641523 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2651686 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2652377 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2656602 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2656918 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2661182 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2667064 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2669672 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2680162 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2689071 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2690900 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2691830 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2708471 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2712556 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2757818 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2762994 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2768765 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2775377 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2775381 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2776678 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2777513 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2778295 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2778986 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2779152 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2779438 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2779450 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2779458 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2780369 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2781280 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2782204 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2782670 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2782676 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2782956 
00:34:05.258 Removing: /var/run/dpdk/spdk_pid2784108 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2785127 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2793231 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2821647 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2825989 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2827771 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2829423 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2829636 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2829799 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2829887 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2830387 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2832221 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2832981 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2833482 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2835589 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2836073 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2836671 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2840848 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2846228 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2846229 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2846230 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2850135 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2858667 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2862685 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2868686 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2869979 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2871301 00:34:05.258 Removing: /var/run/dpdk/spdk_pid2872646 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2877320 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2881645 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2885455 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2892812 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2892818 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2897453 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2897674 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2897787 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2898231 00:34:05.516 Removing: 
/var/run/dpdk/spdk_pid2898329 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2903061 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2903888 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2908356 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2910893 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2916281 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2921615 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2930177 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2937147 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2937157 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2956242 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2956717 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2957367 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2957879 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2958613 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2959093 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2959565 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2960252 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2964289 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2964516 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2970537 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2970641 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2976061 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2980117 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2989992 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2990533 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2994563 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2994939 00:34:05.516 Removing: /var/run/dpdk/spdk_pid2999539 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3004998 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3007561 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3017497 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3026163 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3027766 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3028679 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3044568 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3048737 00:34:05.516 Removing: /var/run/dpdk/spdk_pid3051578 
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3059305
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3059310
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3064342
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3066309
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3068278
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3069535
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3071506
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3072578
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3081525
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3081986
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3082448
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3084726
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3085280
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3085837
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3089807
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3089820
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3091717
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3092269
00:34:05.516 Removing: /var/run/dpdk/spdk_pid3092281
00:34:05.516 Clean
00:34:05.774 16:45:32 -- common/autotest_common.sh@1453 -- # return 0
00:34:05.774 16:45:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:05.774 16:45:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:05.774 16:45:32 -- common/autotest_common.sh@10 -- # set +x
00:34:05.774 16:45:32 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:05.774 16:45:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:05.774 16:45:32 -- common/autotest_common.sh@10 -- # set +x
00:34:05.774 16:45:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:05.774 16:45:32 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:05.774 16:45:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:05.774 16:45:32 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:05.774 16:45:32 -- spdk/autotest.sh@398 -- # hostname
00:34:05.774 16:45:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:05.774 geninfo: WARNING: invalid characters removed from testname!
00:34:27.692 16:45:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:29.065 16:45:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:30.964 16:45:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:32.865 16:45:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:34.766 16:46:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:36.663 16:46:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:38.563 16:46:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:38.563 16:46:05 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:38.563 16:46:05 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:34:38.563 16:46:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:38.563 16:46:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:38.563 16:46:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:38.563 + [[ -n 2539174 ]]
00:34:38.563 + sudo kill 2539174
00:34:38.572 [Pipeline] }
00:34:38.588 [Pipeline] // stage
00:34:38.593 [Pipeline] }
00:34:38.607 [Pipeline] // timeout
00:34:38.613 [Pipeline] }
00:34:38.627 [Pipeline] // catchError
00:34:38.631 [Pipeline] }
00:34:38.644 [Pipeline] // wrap
00:34:38.649 [Pipeline] }
00:34:38.660 [Pipeline] // catchError
00:34:38.669 [Pipeline] stage
00:34:38.671 [Pipeline] { (Epilogue)
00:34:38.684 [Pipeline] catchError
00:34:38.686 [Pipeline] {
00:34:38.698 [Pipeline] echo
00:34:38.700 Cleanup processes
00:34:38.706 [Pipeline] sh
00:34:38.991 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:38.991 3102623 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:39.004 [Pipeline] sh
00:34:39.287 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:39.287 ++ grep -v 'sudo pgrep'
00:34:39.287 ++ awk '{print $1}'
00:34:39.287 + sudo kill -9
00:34:39.287 + true
00:34:39.298 [Pipeline] sh
00:34:39.580 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:51.789 [Pipeline] sh
00:34:52.071 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:52.071 Artifacts sizes are good
00:34:52.085 [Pipeline] archiveArtifacts
00:34:52.092 Archiving artifacts
00:34:52.214 [Pipeline] sh
00:34:52.500 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:52.513 [Pipeline] cleanWs
00:34:52.523 [WS-CLEANUP] Deleting project workspace...
00:34:52.523 [WS-CLEANUP] Deferred wipeout is used...
00:34:52.529 [WS-CLEANUP] done
00:34:52.531 [Pipeline] }
00:34:52.548 [Pipeline] // catchError
00:34:52.560 [Pipeline] sh
00:34:52.853 + logger -p user.info -t JENKINS-CI
00:34:52.873 [Pipeline] }
00:34:52.882 [Pipeline] // stage
00:34:52.886 [Pipeline] }
00:34:52.897 [Pipeline] // node
00:34:52.901 [Pipeline] End of Pipeline
00:34:52.941 Finished: SUCCESS